Your Grant in Review: Scientific Premise

(by drugmonkey) Jul 03 2018

Scientific premise has become the latest headache of uncertainty in NIH grant crafting and review. You can tell because the NIH keeps having to issue clarifications about what it is, and is not. The latest is from Office of Extramural Research honcho Mike Lauer at his blog:

Clarifying what is meant by scientific premise
Scientific premise refers to the rigor of the prior research being cited as key support for the research question(s). For instance, a proposal might note prior studies had inadequate sample sizes. To help both applicants and reviewers describe and assess the rigor of the prior research cited as key support for the proposal, we plan to revise application instructions and review criteria to clarify the language.

Under Significance, the applicant will be asked to describe the strengths and weaknesses in the rigor of the prior research (both published and unpublished) that serves as the key support for the proposed project. Under Approach, the applicant will be asked to describe plans to address weaknesses in the rigor of the prior research that serves as the key support for the proposed project. These revisions are planned for research and mentored career development award applications that come in for the January 25, 2019 due date and beyond. Be on the lookout for guide notices.

My first thought was...great. Fan-friggin-tastic.

You are going to be asked to be more pointed about how the prior research all sucks. No more just saying things about too few studies, variance between related findings, or pablum offerings that the topic needs more research. Oh no. You are going to have to call papers out for inadequate sample size, poor design, bad interpretation, using the wrong parameters or reagents or, pertinent to a recent twitter discussion, running their behavioral studies in the inactive part of the rodent daily cycle.

Now I don't know about all of y'all, but the study sections that review my grants have a tendency to be populated with authors of papers that I cite. Or by their academic progeny or mentors. Or perhaps their tight science homies that they organize symposia and conferences with. Or at the very least their subfield collective peeps that all use the same flawed methods/approaches.

The SABV requirement has, quite frankly, been bad ENOUGH on this score. I really don't need this extra NIH requirement to be even more pointed about the limitations of prior literature that we propose to set about addressing with more studies.

2 responses so far

Journal Citation Metrics: Bringing the Distributions

(by drugmonkey) Jul 03 2018

The latest Journal Citation Reports has been released, updating us on the latest JIF for our favorite journals. New for this year is....

.....drumroll.......

provision of the distribution of citations per cited item. At least for the 2017 year.

The data ... represent citation activity in 2017 to items published in the journal in the prior two years.

This is awesome! Let's dive right in (click to enlarge the graphs). The JIF, btw, is 5.970.

Oh, now this IS a pretty distribution, is it not? No nasty review articles to muck it up and the "other" category (editorials?) is minimal. One glaring omission is that there doesn't appear to be a bar for 0 citations, yet surely some articles are never cited. This makes interpretation of the article citation median (in this case 5) a bit tricky. (For one of the distributions that follows, I came up with the missing 0-citation articles constituting anywhere from 17 to 81 items. A big range.)

Still, the skew in the distribution is clear and familiar to anyone who has been around the JIF critic voices for any length of time. Rare highly-cited articles pull just about every JIF upward from what your mind thinks of as typical, i.e., the median for the journal. Still, no biggie, right? 5 versus 5.970 is not all that meaningful. If your article in this journal from the past two years got 4-6 citations in 2017 you are doing great, right there in the middle.
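To see how a handful of heavily-cited items drags the mean above the median, here is a minimal sketch with entirely made-up citation counts (not the real distribution for this or any of these journals):

```python
import statistics

# Hypothetical two-year citation counts for 200 citable items:
# most articles get a handful of citations, a few get many.
citations = ([0] * 20 + [2] * 40 + [4] * 50 + [5] * 30 +
             [7] * 30 + [10] * 15 + [30] * 10 + [80] * 5)

jif_style_mean = sum(citations) / len(citations)  # what the JIF reports
median = statistics.median(citations)             # the "typical" article

print(f"mean (JIF-style): {jif_style_mean:.2f}")  # 7.45
print(f"median: {median}")                        # 4.0
```

The 15 items cited 30 or 80 times pull the mean well above the experience of the typical article, which is the whole complaint about the JIF in a nutshell.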

Let's check another Journal....

Ugly. Look at all those "Other" items. And the skew from the highly-cited items, including some reviews, is worse. JIF is 11.982 and the article citation median is 7. So among other things, many authors are going to feel like they impostered their way into this journal since a large part of the distribution is going to fall under the JIF. Don't feel bad! Even if you got only 9-11 citations, you are above the median and with 6-8 you are right there in the hunt.

Final entry of the day:

Not too horrible looking although clearly the review articles contribute a big skew, possibly even more than in the second journal, where the reviews are seemingly more evenly distributed in terms of citations. Now, I will admit I am a little surprised that reviews don't do even better compared with primary research articles. It seems like they would get cited more than this (for both of these journals) to me. The article citation median is 4 and the JIF is 6.544, making for a slightly greater gap than the first one, if you are trying to bench race your citations against the "typical" for the journal.

The first takeaway message from these new distributions, viewed along with the JIF, is that you can get a much better idea of how your articles are faring (in your favorite journals, these are just three) compared to the expected value for that journal. Sure, sure, we all knew at some level that the distribution contributing to JIF was skewed and that the median would be a better number to reflect the colloquial sense of typical, average performance for a journal.

The other takeaway is a bit more negative and self-indulgent. I do it so I'll give you cover for the same.

The fun game is to take a look at the articles that you've had rejected at a given journal (particularly when rejection was on impact grounds) but subsequently published elsewhere. You can take your article's citations in the "JCR" year (i.e., the second of the two years after it was published) and match that up with the citation distribution of the journal that originally rejected your work. In the past, if you met the JIF number, you could be satisfied they blew it and that your article indeed had impact worthy of their journal. Now you can take it a step further because you can get a better idea of when your article beat the median. Even if your actual citations are below the JIF of the journal that rejected you, your article may have been one that would have boosted their JIF by beating the median.
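The game described above amounts to a simple three-way comparison. A sketch, with a hypothetical helper name, using the second journal's numbers from above (median 7, JIF 11.982) purely as example inputs:

```python
def grade_rejected_article(jcr_year_citations: int,
                           journal_median: float,
                           journal_jif: float) -> str:
    """Compare a rejected-then-published-elsewhere article against the
    rejecting journal's citation stats (hypothetical helper, not a real API)."""
    if jcr_year_citations >= journal_jif:
        return "beat the JIF"     # would have raised their mean outright
    if jcr_year_citations > journal_median:
        return "beat the median"  # above their typical article
    return "below the median"

# e.g. 8 citations vs. a journal with median 7 and JIF 11.982
print(grade_rejected_article(8, journal_median=7, journal_jif=11.982))
# -> beat the median
```

The middle category is the new one: before the distributions were published, an article with 8 citations just looked like it fell short of the 11.982 JIF.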

Still with me, fellow axe-grinders?

Every editorial staff I've ever seen talk about journal business in earnest is concerned about raising the JIF. I don't care how humble or soaring the baseline, they all want to improve. And they all want to beat some nearby competitors. Which means that if they have any sense at all, they are concerned about decreasing the uncited dogs and increasing the articles that will be cited in the JCR year above their JIF. Hopefully these staffs also understand that they should be beating their median citation year over year to improve. I'm not holding my breath on that one. But this new publication of distributions (and the associated chit chat around the campfire) may help with that.

Final snark.

I once heard someone concerned with JIF of a journal insist that they were not "systematically overlooking good papers" meaning, in context, those that would boost their JIF. The rationale for this was that the manuscripts they had rejected were subsequently published in journals with lower JIFs. This is a fundamental misunderstanding. Of course most articles rejected at one JIF level eventually get published down-market. Of course they do. This has nothing to do with the citations they eventually accumulate. And if anything, the slight downgrade in journal cachet might mean that the actual citations slightly under-represent what would have occurred at the higher JIF journal, had the manuscript been accepted there. If Editorial Boards are worried that they might be letting bigger fish get away, they need to look at the actual citations of their rejects, once published elsewhere. And, back to the story of the day, those actual citations need to be compared with the median for article citations rather than the JIF.

4 responses so far

Light still matters

(by drugmonkey) Jul 02 2018

In the midst of all this hoopla about reliability, repeatability, the replication crisis and what not the Editorial Board of the Journal of Neuroscience has launched an effort to recommend best practices. The first one was about electrophysiology. To give you a flavor:

There is a long tradition in neurophysiology of using the number of neurons recorded as the sample size (“n”) in statistical calculations. In many cases, the sample of recorded neurons comes from a small number of animals, yet many statistical analyses make the explicit assumption that a sample constitutes independent observations. When multiple neurons are recorded from a single animal, however, either sequentially with a single electrode or simultaneously with multiple electrodes, each neuron's activity may not, in fact, be independent of the others. Thus, it is important for researchers to account for variability across subjects in data analyses.

I emphasize the "long tradition" part because clearly the Editorial Board does not intend this effort to just nibble around the edges. It is going straight at some long-used practices that they think need to change.
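The statistical point in the quoted passage can be illustrated with a quick simulation (made-up numbers, not from any real dataset): when neurons recorded from the same animal share an animal-level offset, treating every neuron as an independent observation wildly inflates the false-positive rate, while collapsing to one value per animal keeps it near the nominal level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n_animals=5, n_neurons=20, animal_sd=1.0, neuron_sd=1.0):
    """Two groups with NO true group difference; each animal contributes
    n_neurons measurements that share a per-animal offset (non-independence)."""
    def group():
        offsets = rng.normal(0, animal_sd, n_animals)  # animal-level effect
        return offsets[:, None] + rng.normal(0, neuron_sd, (n_animals, n_neurons))
    return group(), group()

n_sims, alpha = 500, 0.05
fp_neuron = fp_animal = 0
for _ in range(n_sims):
    a, b = simulate()
    # Wrong: treat every neuron as independent (n = 100 per group)
    if stats.ttest_ind(a.ravel(), b.ravel()).pvalue < alpha:
        fp_neuron += 1
    # Better: one value per animal (n = 5 per group)
    if stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue < alpha:
        fp_animal += 1

print(f"false-positive rate, per-neuron n: {fp_neuron / n_sims:.2f}")
print(f"false-positive rate, per-animal n: {fp_animal / n_sims:.2f}")
```

With a true effect of zero, the per-neuron analysis rejects the null far more often than the nominal 5%, while the per-animal analysis stays close to it. (Averaging per animal is the simplest fix; mixed-effects models are the more general one.)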

There was a long and very good twitter thread from someone which dealt in part with unreliability relating to when one chooses to conduct behavioral tasks in rodents with respect to their daily light cycle. As a reminder, rodents are nocturnal and are most active (aka "awake") in the dark. Humans, as a reminder, are not. So, as you might imagine, there is a lot of rodent research (including behavioral research) that fails to grasp this difference and simply runs the rats in their light cycle. Also known as their inactive part of the day. Aka. "asleep".

I am being totally honest when I say that the response has been astonishing to me. The pushback!

It's totally surprising that we not only got a lot of "it doesn't matter" responses but also a lot of implication that running in the light cycle is better (without quite saying so directly). I'm not going to run down everything but players include @svmahler, @StephenMaren, @aradprof, @DennisEckmeier, @jdavidjentsch, and @sleepwakeEEG.

There are just too many ludicrous things being said to characterize them all. But, one species of argument is "it doesn't matter [for my endpoint]". The last part is implied. But early in this thread I posted a link to my prior post which discusses two of my favorite papers on this topic. Scheving et al., 1968 showed a four-fold difference in mortality rate after a single dose of amphetamine depending on when it was administered. Roberts and colleagues showed that cocaine self-administration changes all across the day in a very nice circadian pattern. I also noted a paper I had discussed very indirectly in a post on contradicting your own stuff. Halberstadt and colleagues (2012) played around with some variables in a very old model from the Geyer lab and found that time of day interacted with other factors to change results in a rat locomotor assay. I mean c'mon, how many thousands of papers use locomotor assays to assess psychomotor stimulant drugs?

There's some downshifting and muttering in the tweet discussion about "well if it doesn't matter who cares" but nobody has started posting published papers showing where light cycle doesn't matter for their assays (as a main factor or as an interaction). Yet. I'm sure it is just an oversight. Interestingly the tone of this seems to be arguing that it is ridiculous to expect people to do their rat assays in reverse light unless it is proven (I guess by someone else?) that it changes results.

This, my friends, is very much front and center in the "reproducibility crisis" that isn't. Let us return to the above comment at J Neuro about "long traditions". Do you know how hard it is to fight long traditions in scientific subareas? Sure you do. Trying to get funded, or publish resulting studies, that deal with the seemingly minor choices that have been made for a long time is very difficult. Boring and incremental. Some of these things will come out to be negative, i.e., it won't matter what light cycle is used. Good luck publishing those! It's no coincidence that the aforementioned Halberstadt paper is published in a very modest journal. So we end up with a somewhat random assortment of some people doing their work in the animals' inactive period and some in the active period. Rarely is there a direct comparison (i.e., within lab). So who knows what contribution that is....until you try to replicate it yourself. Wasting time and money and adding potential interactions.....very frustrating.

So yes, we would like to know it all, kind of like we'd like to know everything in male and female animals. But we don't. The people getting all angsty over their choice to run rodents in the light next tried a back-and-fill, "can't we all get along" type of approach that harmonizes with this sentiment. They aren't wrong, exactly. But let us return to the J Neuro Editorial effort on best practices. There IS a best option here, if we are not going to do it all. There's a slippery slope in your choice of default versus checking the other. And for behavioral experiments that are not explicitly looking at sleepy rats or mice, the best option is running them in their active cycle.

There is lots of fun ridiculousness in the thread. I particularly enjoyed the argument that because rats might be exposed briefly to light in the course of trying to do reverse-cycle experiments, we should just default to light cycle running. Right? Like if you walk from the light into a darkened room you suddenly fall asleep? Hell no. And if you are awakened by a light in your face in the middle of the night you are suddenly as awake as if it were broad noon? HAHAHHAHAA! I love it.

Enjoy the threads. Click on those tweeter links above and read the arguments.

__
Roberts DC, Brebner K, Vincler M, Lynch WJ. Patterns of cocaine self-administration in rats produced by various access conditions under a discrete trials procedure. Drug Alcohol Depend. 2002 Aug 1;67(3):291-9. [PubMed]

Scheving LE, Vedral DF, Pauly JE. Daily circadian rhythm in rats to D-amphetamine sulphate: effect of blinding and continuous illumination on the rhythm. Nature. 1968 Aug 10;219(5154):621-2. [PubMed]

9 responses so far

Preprint Ratio

(by drugmonkey) Jun 30 2018

We're approximately one year into the NIH policy that encouraged deposition of manuscripts into preprint servers. My perception is that the number of labs taking the time to do so is steadily increasing.

It is rather slow compared to what I would have expected, going by the grant applications I have reviewed in the past year.

Nevertheless, preprint deposition is getting popular enough that the secondary questions are worth discussing.

How many preprints are too many?

Meaning, is there a ratio of preprints to now-published-in-Journalofology preprints that is of concern?

It is sort of like the way I once viewed listing conference abstracts on a CV. It's all good if you can see a natural progression leading up to eventual publication of a paper. If there are a lot of conference presentations that never led to papers then this seems....worrisome?

So I've been thinking about how preprints may be similar. If one has a ton of preprints that never ever seem to get published, this may be an indication of certain traits. Bad traits. Inability-to-close type of traits.

So I have been thinking that one of the things guiding my preprint behavior is how many my lab has at a given time that have not advanced to publication yet. And maybe there are times when waiting to upload more preprints is advisable.

Thoughts, Dear Reader?

16 responses so far

Faculty Retention: The Rule of 10

(by drugmonkey) Jun 21 2018

There's a thread on faculty retention (or lack thereof, really) on the twitts today:

I know this is a little weird for the segments of my audience that are facing long odds even to land a faculty job and for those junior faculty who are worried about tenure. Why would a relatively secure Professor look for a job in a different University? Well.....reasons.

As is typical, the thread touched on the question of why Universities won't work harder in advance to keep their faculty so delightfully happy that they would never dream of leaving.

Eventually I mentioned my theory of how Administration views retention of their faculty.

I think Administration (and this is not just academics, I think this applies pretty broadly) operates from the suspicion that workers always complain and most will never do anything about it. I think they suppose that for every 10 disgruntled employees, only 5 will even bother to apply elsewhere. Of these maybe three will get serious offers. Ultimately only one will leave*.

So why invest in 10 to keep 1?

This, anyway, is what I see as motivating much of the upper management thinking on what appear to be inexplicably wasteful faculty departures.

Reality is much more nuanced.

I think one of the biggest mistakes being made is that by the time a last-ditch, generally half-arsed retention ploy is attempted it can be psychologically too late. The departing faculty member is simply too annoyed at the current Uni and too dazzled by the wooing from the new Uni to let any retention offer sway their feelings. The second biggest mistake is that if there is an impression created that "everybody is leaving" and "nobody is being offered reasonable retention" this can spur further attempts to exit the building before the roof caves in.

Yes, I realize some extremely wealthy private Universities all covered in Ivy have the $$ to keep all their people happy all of the time. This is not in any way an interesting case. Most Universities have to be efficient. Spending money on faculty that are going to stay anyway may be a waste, better used elsewhere. Losing too many faculty that you've spent startup costs on is also inefficient.

So how would you strike the right balance if you were Dean at a R1 University solidly in the middle of the pack with respect to resources?
__
*Including by method of bribing one or more of the "serious offers" crowd to stay via the mysteries of the RetentionPackageTM

17 responses so far

On PO/PI interactions to steer the grant to the PI's laboratory

(by drugmonkey) Jun 18 2018

There has been a working group of the Advisory Committee to the Director (of NIH, aka Francis Collins) which has been examining the Moderate Alcohol and Cardiovascular Health Trial in the wake of a hullabaloo that broke into public earlier this year. Background on this from Jocelyn Kaiser at Science, from the NYT, and the WaPo. (I took up the sleazy tactics of the alleged profession of journalism on this issue here.)

The working group's report is available now [pdf].

Page 7 of that report:

There were sustained interactions (from at least 2013) between the eventual Principal Investigator (PI) of the MACH trial and three members of NIAAA leadership prior to, and during development of, FOAs for planning and main grants to fund the MACH trial

These interactions appear to have provided the eventual PI with a competitive advantage not available to other applicants, and effectively steered funding to this investigator

Page 11:

NIH Institutes, Centers, and Offices (ICOs) should ensure that program staff do not inappropriately provide non-public information, or engage in deliberations that either give the appearance of, or provide, an advantage to any single, or subset of, investigator(s)

The NIH should examine additional measures to assiduously avoid providing, or giving the appearance of providing, an advantage to any single, or subset of, investigator(s) (for example, in guiding the scientific substance of preparing grant applications or responding to reviewer comments)

The webcast of the meeting of the ACD on Day 2 covers the relevant territory but is not yet available in archived format. I was hoping to find the part where Collins apparently expressed himself on this topic, as described here.

In the wake of the decision, Collins said NIH officials would examine other industry-NIH ties to make sure proper procedures have been followed, and seek out even “subtle examples of cozy relationships” that might undermine research integrity.

When I saw all of this I could only wonder if Francis Collins is familiar with the RFA process at the NIH.

If you read RFAs and take the trouble to see what gets funded out of them you come to the firm belief that there are a LOT of "sustained interactions" between the PO(s) that are pushing the RFA and the PI that is highly desired to be the lucky awardee. The text of the RFAs in and of themselves often "giv(e) the appearance of providing, an advantage to any single, or subset of, investigator(s)". And they sure as heck provide certain PIs with "a competitive advantage not available to other applicants".

This is the way RFAs work. I am convinced. It is going to take a huge mountain of evidence to the contrary to counter this impression, which can be reinforced by looking at some of the RFAs in your closest fields of interest and seeing who gets funded and for what. If Collins cares to include failed grant applications from those PIs that lead up to the RFA being generated (in some cases) I bet he finds that this also supports the impression.

I really wonder sometimes.

I wonder if NIH officialdom is really this clueless about how their system works?

...or do they just have zero compunction about dissembling when they know full well that these cozy little interactions between PO and favored PI working to define Funding Opportunity Announcements are fairly common?

__
Disclaimer: As always, Dear Reader, I have related experiences. I've competed unsuccessfully on more than one occasion for a targeted FOA where the award went to the very obvious suspect lab. I've also competed successfully for funding on a topic for which I originally sought funding under those targeted FOAs- that takes the sting out. A little. I also suspect I have at least once received grant funding that could fairly be said to be the result of "sustained interactions" between me and Program staff that provided me "a competitive advantage" although I don't know the extent to which this was not available to other PIs.

10 responses so far

Twitter Cloud

(by drugmonkey) Jun 16 2018

Sounds just about right

2 responses so far

The culture of "the lab that socializes together" enables the predators

(by drugmonkey) Jun 14 2018

There is a cautionary tale in the allegations against three Dartmouth Professors who are under investigation (one retired after a Dean recommended that he be fired) for sexual harassment, assault and/or discrimination. From The Dartmouth:

several students in the PBS department described what they called an uncomfortable workplace culture that blurred the line between professional and personal relationships.

Oh, hai, buzzkill! I mean it's just normal socializing. If you don't like it nobody is forcing you to do it man. Why do you object to the rest of us party hounds having a little fun?

They said they often felt pressured to drink at social events in order to further their professional careers, a dynamic that they allege promoted favoritism and at times inappropriate behavior.

The answer is that this potential for nastiness is always lurking in these situations. There are biases within the laboratory that can have very lasting consequences for the trainees. Who gets put on what projects. Who gets preferential resources. Who is selected to attend a fancy meeting with a low trainee/PI ratio? Who is introduced around as the amazing talented postdoc and who is ignored? This happens all the time to some extent but why should willingness (and ability, many folks have family responsibilities after normal working hours) to socialize with the lab affect this?

Oh, come on, buzzkill! It's just an occasional celebration of a paper getting accepted.

Several students who spoke to The Dartmouth said that Kelley encouraged his lab members to drink and socialize at least weekly, often on weeknights and at times during business hours, noting that Whalen occasionally joined Kelley for events off-campus.

Or, you know, constantly. Seriously? At the very least the PI has a drinking problem* and is covering it up with invented "lab" reasons to consume alcohol. But all too often it turns sinister and you can see the true slimy purpose revealed.

At certain social events, the second student said she sometimes refused drinks, only to find another drink in her hand, purchased or provided by one of the professors under the premise of being “a good host.”

Yeah, and now we get into the area of attempted drug-assisted sexual assault. Now sure, it could just be the PI thinking the grad student or postdoc can't afford the drinks and wants to be a good chap. It could be. But then.....

She described an incident at a social event with members of the department, at which she said everyone was drinking, and one of the professors put his arm around her. She said his arm slid lower, to the point that she was uncomfortable and “very aware of where his hand [was] on [her] body,” and she said she felt like she was being tested.

Ugh. The full reveal of the behavior.

Look, as always, there is a spectrum here. The occasional lab celebration that involves the consumption of alcohol, and the society meeting social event that involves consumption of alcohol, can be just fine. Can be. But these traditions in the academic workplace are often co-opted by the creeper to his own ends. So you can end up with that hard-partying PI who is apparently just treating his lab like "friends" or "family" and believes that "everyone needs to blow off steam" to "build teamwork" and this lets everyone pull together....but then the allegations of harassment start to surface. All of the "buddies" who haven't been affected (or more sinisterly have been affected for the good) circle the wagons.
Bro 1: Oh, he's such a good guy.
Bro 2: Why are you being a buzzkill?
Bro 3: Don't you think they are misinterpreting?

He isn't, because people are being harmed and no, the victims are not "misinterpreting" the wandering arm/hand.

Keep a tight rein on the lab-based socializing, PIs. It leads to bad places if you do not.

__
*And that needs to be considered even when there is not the vaguest shred of sexual assault or harassment in evidence.

16 responses so far

Plea bargains are unsatisfying to their vict.... wait, again?

(by drugmonkey) Jun 14 2018

There has been a case of sexual harassment, assault and/or workplace misconduct at Dartmouth College that has been in the news this past year.

In allegations that span multiple generations of graduate students, four students in Dartmouth’s department of psychological and brain sciences told The Dartmouth this week that three professors now under investigation by the College and state prosecutors created a hostile academic environment that they allege included excessive drinking, favoritism and behaviors that they considered to be sexual harassment.

It was always a little bit unusual because three Professors from the same department (Psychological and Brain Sciences) were seemingly under simultaneous investigation and the NH State AG launched an investigation at the same time. It is not at all clear to me yet but it seems to be a situation in which the triggering behaviors are not necessarily linked.

The news of the day (via Valley News) is that one of the professors under investigation has retired, "effective immediately".

Professor Todd Heatherton has retired, effective immediately, following a recommendation by the dean of the faculty of arts and sciences, Elizabeth Smith, that his tenure be revoked and that he be terminated, Hanlon said in the email.

“In light of the findings of the investigation and the dean’s recommendation, Heatherton will continue to be prohibited from entering campus property or from attending any Dartmouth-sponsored events, no matter where they are held,” Hanlon wrote.

This comes hard on the heels of Inder Verma retiring from the Salk Institute just before their institutional inquiry was set to conclude.

I understand the role of plea bargains in normal legal proceedings. I am not sure I understand the logic of the approach when it comes to busting sexual harasser/discriminator individuals in academia. I mean sure, it may avoid a protracted legal fight between the alleged perpetrator and the University or Institute as the former fights to retain a shred of dignity, membership in the NAS or perhaps retirement benefits. But for the University or Institute, in this day and age of highly public attention, it just looks like they are, yet again, letting a perp off the hook*. So any fine statements they may have made about taking sexual discrimination seriously and having zero tolerance ring hollow. I am mindful that what we've seen in the past is that the Universities and Institutes are fully willing to deploy their administrative and legal apparatus to defend an accused perpetrator, often for years and in repeated incidents, when they think it is in their interest to do so. So saving money can't really be the reason. It really does seem to be further institutional protection- they cannot be accused of having admitted to defending and harboring the perp over the past years or decades of his harassing behavior.

It is all very sad for the victims, who are left with very little. There is no formal finding of guilt to support their allegations. There is no obvious punishment in a guy who should probably have long since retired (Verma is 70) simply retiring. There is not even any indirect apology from the University or Institution. I wish we could do better.

__
*At least in the Verma case, the news reporting made it very clear that the Salk Board of Trustees formally accepted Verma's tender of resignation which apparently then halted any further consideration of the case. They could have chosen not to accept it, one presumes.

2 responses so far

Plea bargains are unsatisfying to the victims of their type of crime

(by drugmonkey) Jun 11 2018

Inder Verma has resigned his position at the Salk Institute before a formal conclusion was reached in their internal investigation. One can only imagine they were moving toward a finding of guilt and he was tipped to resign.

http://www.sciencemag.org/news/2018/06/leading-salk-scientist-resigns-after-allegations-harassment

5 responses so far
