Review your own CV. Frequently.

(by drugmonkey) Apr 14 2015

Yesterday's review of the research publications of a person who had written about closing down his lab, due to lack of funding in this "unfair" grant award environment, touched a nerve with at least one Reader. I assume it was uncomfortable for many of you to read.

It was uncomfortable for me to write.

You can tell because I felt compelled to slip in the odd caveat about my own record. I could write one of those reviews of my own career that would be equally critical and uncomfortable, if not more so.

No doubt more than one of you got the feeling that if I wrote a similar review of your record you would come up wanting... or at least PhysioProffe would jump in to tell you how shitasse you are*. Certainly at least one correspondent expressed this feeling.

But that tinge of anxiety, fear and possibly shame that you feel should tell you that it is a good idea to perform this little review of yourself now and again. Good to try to step outside of your usual excuses to yourself and see how your CV looks to the dispassionate observer who doesn't know anything about your career other than the publication and NIH-award (or other grants, as relevant) record.

Do you have obvious weaknesses? Too few publications? Too few first/last-author papers (as appropriate)? Too few collaborations? Insufficiently high Journal Impact Factor points? Etc.

What is all of this going to say to grant reviewers, hiring committees or promotions committees?

Then, this allows you to do something about it. You can't change the past but you can alter the course of your future.

In some situations, like crafting the NIH Biosketch Personal Statement, you do actually have the opportunity to alter the past... not the reality, but certainly the perception of it. So that is another place where the review of your CV helps. That voice of excuse-making that arises? Leverage it. You DO have reasons for certain weaknesses, and perhaps other features of your CV help to overcome them if they are just pointed out properly.

___
*he wouldn't, btw.

32 responses so far

On productivity and the "unfair" grant funding game

(by drugmonkey) Apr 13 2015

There is an article up on ASBMB Today by Andrew D. Hollenbach that laments the shutdown of his research program. In The reality that dare not speak its name we learn:

It was the day after my lab manager left, forced to find a new job by a vicious funding environment that took a trusted employee and friend from me and shut down my research program.

This is terrible, I will acknowledge. I have feared this outcome for my own research program, only briefly interrupted, for my entire independent career. The wolves are always near the door and winter is most certainly coming.

Hollenbach finds this to be unfair. And that assertion triggers slightly more thought than mere sympathy and empathetic butt clenching.

I spent 20 years studying the mechanisms underlying a childhood muscle tumor. I published more than 20 articles with a lab of no more than three people at one time, intentionally kept small so I could focus on mentoring. I established a new paradigm in my field, identified viable therapeutic targets and trained five students (three of whom went to Harvard University for postdocs). I am recognized worldwide for my research.

You would think that all of that would be enough to bring in money and continue my research. But it’s not.

My immediate thought was no, no I don't think that is enough in this day and age. Twenty papers in 20 years of an independent career is not a fantastic publishing rate. Of course, yes, there are going to be field and model specifics that greatly affect publishing rate. There will be differences in publishing style and venue as well... if this had been 20 CNS publications, well, this would be pretty good productivity. But a search of PubMed seems to confirm that the pursuit of the very highest Glamour publications was not the issue. I am not an expert in this guy's field of study, but glancing over his publication titles and journals I get the distinct impression of a regular-old Jane/Joe type of scientist here. Many people can claim to have established new paradigms, sent trainees off to impressive-sounding postdoctoral stints (or assistant professorships) and to have identified 'viable' therapeutic targets. I say this not to belittle the guy but to point out that this is not in any way special. It is not an immediately obvious compensation for a rather underwhelming rate of publication. For a PI, that is, who asserts he's had a long-term lab manager and up to three people in his group at a time.

Hollenbach's funding hasn't been overwhelmingly generous but he's had NIH grants. RePORTER shows that he started with a component of a P20 Center grant from 2004-2009 and an R01 from 2009-2013.

Wait. What "20 years"?

Hollenbach's bio claims he was made junior faculty in 2001 and won his first Assistant Professor job in 2003. This matches up better with his funding history, so I think we'd better just focus on the past 10 years to really take home a message about careerism. One senior-author publication in 2003 from that junior-faculty stint, then the next one in 2007 and then three in 2008. So far, so good. Pretty understandable for the startup launch of a new laboratory.

Then we note that there is only one paper in each of 2009 and 2010. Hmmm. Things can happen, sure. Sure. Two papers in 2011 but one is a middle authorship. One more publication in each of 2012, 2014 and 2015 (to date). The R01 grant lists 7 pubs as supported but two of those were published before the grant was awarded and one was published 9 months into the first funding interval. So 5 pubs supported by the R01 in this second phase. And an average as a faculty member that runs just under a publication per year.
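To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch (Python), using the rough per-year tallies described above as illustrative numbers rather than a verified PubMed pull:

    # Illustrative publication counts per year, taken from the rough tally in the
    # text above (not a verified bibliography).
    pubs_per_year = {
        2003: 1, 2007: 1, 2008: 3, 2009: 1, 2010: 1,
        2011: 2, 2012: 1, 2014: 1, 2015: 1,
    }

    total = sum(pubs_per_year.values())   # 12 papers
    years = 2015 - 2003 + 1               # 13 faculty years, counted inclusively
    print(f"{total} papers / {years} years = {total / years:.2f} per year")  # ~0.92

That works out to just under one paper per year, which is the figure doing the work in what follows.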

Lord knows I haven't hit an overwhelming publication output rate across my entire career. I understand slowdowns. These are going to happen now and again. And for certain sure there are going to be chosen model systems that generate publishable datasets more slowly than others.

But.

But.....

One paper per year, sustained across 10 years, is not the kind of productivity rate that people view as normal and average and unremarkable. Particularly when it comes to grant review at the NIH level.

I would be very surprised if the grant applications this PI has submitted did not receive a few comments questioning his publication output.

Look at my picture, and you will not see a failure. You will see someone who worked hard, excelled at what he did, held true to himself and maintained his integrity. However, you also will see someone whose work was brought to a halt by an unfair system.

Something else occurs to me. The R01 was funded up to March 2013. So this presumably means that this recent dismissal of the long-term lab manager comes after a substantial interval of grant submission deadlines? I do wonder how many grant applications the guy submitted and what the outcomes were. This would seem highly pertinent to the "unfair system" comment. You know my attitude, Dear Reader. If one is supported on a single grant, bets the farm on a competing continuation hitting right on schedule and is disappointed...this is not evidence of the system being unfair. If a PI is unfunded and submits a grant, waits for the reviews, skips a round, submits the revision, waits for the reviews, skips another round, writes a new proposal..... well, THIS IS NOT ENOUGH! This is not trying. And if you are not trying, you have no right to talk about the "unfair system" as it applies to your specific outcome.

I close, as I often do, with career advice. Don't do this, people. Don't let yourself publish at the lower bound of what is considered an acceptable rate for your field, approaches, models and, most importantly, your funding agency's review panels.

PS: This particular assertion regarding what surely must be necessary to survive as a grant-funded scientist is grotesquely inaccurate.

Some may say that I did not do enough. Maybe I didn’t. I could have been a slave-driving mentor to get more publications in journals with higher impact factors. I could have worked 80-hour weeks, ignoring my family and friends. I could have given in to unfettered ambition, rolling over anyone who got in my way.

98 responses so far

A new Stock Criticism for NIH Grants

(by drugmonkey) Apr 11 2015

I have decided to deploy a brand new Stock Criticism.

 "No effort for a staff scientist is described, which may limit progress."

Feel free to borrow it.

5 responses so far

U Maryland faculty want to decrease postdoc benefits to stay competitive

(by drugmonkey) Apr 08 2015

from Andrew Dunn, staff writer at The Diamondback:


A group of life sciences professors and administrative chairs appeared before the Faculty Affairs Committee yesterday to discuss adding a second title for postdoctoral students and professionals in temporary positions. The only employee title currently available for postdoctoral students at this university is “postdoctorate research assistant,” which classifies all postdoctoral students as non-tenure-track faculty.

huh, that actually sounds pretty good. What's the problem?

The life sciences programs collected research on postdoctoral students from the other Big Ten schools and found that out of the 13 other schools in the conference, eight offer health benefits and two offer retirement benefits. The only other schools to offer retirement benefits are Northwestern University and Indiana University, which both have significantly higher funding for life sciences.

Jonathan Dinman, professor and cell biology and molecular genetics department chairman, said the current academic environment requires this new title for postdoctoral students for this university to stay competitive and on track with fellow Big Ten schools.

Ahh yes. The cry of every labor exploiter since forever. "We must screw the humblest, least-paid workers to stay competitive"!!!

37 responses so far

McKnight posts an analysis of NIH peer review

(by drugmonkey) Apr 08 2015

Sortof.

In his latest column at ASBMB Today, Steve McKnight attempts to further his assertion that peer review of NIH grants needs to be revamped so that more qualified reviewers are doing the deciding about what gets funded.

He starts off with a comment that further reveals his naivete and noobitude when it comes to these issues.

Reviewers judge the application using five criteria: significance, investigator, innovation, approach and environment. Although study sections may weigh the importance of these criteria to differing degrees, it seems to me that feasibility of success of the proposed research plan (approach) tends to dominate. I will endeavor to provide a quantitative assessment of this in next month’s essay.

The NIH, led by then-NIGMS Director Berg, already provided this assessment. Ages ago. Try to keep up. I mention this because it is becoming an obvious trend that McKnight (and, keep in mind, many of his co-travelers who don't reveal their ignorance quite so publicly) spouts off his ill-informed opinions without the benefit of the data that you, Dear Reader, have been grappling with for several years now.

As reported last month, 72 percent of reviewers serving the HHMI are members of the National Academy of Sciences. How do things compare at the NIH? Data kindly provided by the CSR indicate that there were 7,886 reviewers on its standing study sections in 2014. Evaluation of these data reveals the following:

48 out of 324 HHMI investigators (15 percent) participated in at least one study section meeting.
47 out of 488 NIH-funded NAS members (10 percent) participated in at least one study section meeting.
11 of these reviewers are both funded by HHMI and NAS members.

These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.

This tells us nearly nothing of importance. How many investigators from other pertinent slices of the distribution serve? ASBMB members, for example? PIs from the top 20, 50, 100 funded Universities and Medical Schools? How many applications do NAS / HHMI investigators submit each year? In short, are they over- or under-represented in the NIH review system?
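For what it is worth, the arithmetic in the quoted passage is at least internally consistent; a quick sketch (Python), using only the figures McKnight quotes:

    # Figures taken from the quoted passage above; nothing independently verified.
    hhmi_reviewers = 48    # of 324 HHMI investigators
    nas_reviewers = 47     # of 488 NIH-funded NAS members
    overlap = 11           # reviewers who are both HHMI-funded and NAS members
    csr_pool = 7886        # CSR standing study section reviewers in 2014

    unique_elite = hhmi_reviewers + nas_reviewers - overlap   # inclusion-exclusion: 84
    print(f"{hhmi_reviewers / 324:.0%} of HHMI, {nas_reviewers / 488:.0%} of NAS")   # 15%, 10%
    print(f"{unique_elite} reviewers = {unique_elite / csr_pool:.1%} of the CSR pool")  # 1.1%

The problem is not the arithmetic; it is the missing denominators just noted.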

Anyway, why focus on these folks?

I have focused on the HHMI investigators and NAS members because it is straightforward to identify them and quantify their participation in the review process. It is my belief that HHMI investigators and NIH-funded members of the NAS are substantively accomplished. I readily admit that scientific accomplishment does not necessarily equate to effective capacity to review. I do, however, believe that a reasonable correlation exists between past scientific accomplishment and capacity to choose effectively between good and poor bets. This contention is open for debate and is — to me — of significant importance.

So confused. First, the supposed rationale is that these elite scientists are readily discernible amongst a host of well-qualified folks, so that's why he has used them for his example, aka the Street Lamp excuse. Next we get a ready admission that the entire thesis he's been pursuing since the riff-raff column is flawed, followed immediately by a restatement of his position based on... "belief". While admitting it is open to debate.

So how has he moved the discussion forward? All that we have at this point is his continued assertion of his position. The data on study section participation do exactly nothing to address his point.


Third, it is clear that HHMI investigators and NIH-funded members of the NAS participate in study sections charged with the review of basic research to a far greater extent than clinical research. It is my belief that study sections involving HHMI investigators and NAS members benefit from the involvement of highly accomplished scientists. If that is correct, the quality of certain basic science study sections may be high.

Without additional information this could be an entirely circular argument. If HHMI and NAS folks are selected disproportionately for their pursuit of basic science (I believe they are, Professor McKnight. Shall you accept my "belief" as we are expected to credit yours? Or perhaps should you have looked into this?) then of course they would be disproportionately represented on "basic" study sections. If only there were a clinically focused organization of elite good-old-backslappers-club folks to provide a suitable comparison of more clinically focused scientists.

McKnight closes with this:

I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. In subsequent essays, I will offer ideas as to how the NIH system of grant award distribution might be altered to meet this goal.

Nope. We want the funding to go to the most important science. Within that constraint we want the funding to go to highly qualified scientists, but we recognize that identifying "the most qualified" is a fool's errand. Other factors come into play. Such as "the most qualified who are not overloaded with other research projects at the moment". Or, "the most qualified who are not essentially carbon copies of the three other folks funded in similar research at the moment".

This is even before we get into the very thorny argument over qualifications and how we identify the "most" qualified for any particular purpose.

McKnight himself admits to this when he claims that there are lots of other qualified people but he selected HHMI/NAS out of mere convenience. I wonder if it will eventually trickle into his understanding that this mere convenience pollutes his entire thinking on this matter?

h/t: philapodia

48 responses so far

Another GlamourMag Data Faker is Busted by ORI

(by drugmonkey) Apr 07 2015

In the Federal Register:

Ryousuke Fujita, Ph.D., Columbia University: Based on the report of an investigation conducted by Columbia University (CU) and additional analysis conducted by ORI in its oversight review, ORI found that Dr. Ryousuke Fujita, former Postdoctoral Scientist, Taub Institute for the Aging Brain, Departments of Pathology and Cell Biology and Neurology, CU Medical Center, engaged in research misconduct in research supported by National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health (NIH), grant R01 NS064433 and National Institute on Aging (NIA), NIH, grant R01 AG042317.

ORI found that Respondent engaged in research misconduct by falsifying and fabricating data for specific protein expressions in human-induced neuronal (hiN) cells derived from skin fibroblasts of Alzheimer's disease patients and unaffected individuals in seventy-four (74) panels included in figures in the following two (2) publications and one (1) unpublished manuscript:

Wow. 74 panels faked in a mere three papers? One wonders how many valid panels could possibly be left.

So what are the papers?

Nature. 2013 Aug 1;500(7460):45-50. doi: 10.1038/nature12415. Epub 2013 Jul 24.
Integrative genomics identifies APOE ε4 effectors in Alzheimer's disease.
Rhinn H, Fujita R, Qiang L, Cheng R, Lee JH, Abeliovich A. [PubMed]

Nature, eh? Glamour number one. And I note that this busy-bee faker is listed second with a co-equal contribution symbol. No evidence on the publisher site that this has been retracted, that I can see.

Cell. 2011 Aug 5;146(3):359-71. doi: 10.1016/j.cell.2011.07.007.
Directed conversion of Alzheimer's disease patient skin fibroblasts into functional neurons.
Qiang L, Fujita R, Yamashita T, Angulo S, Rhinn H, Rhee D, Doege C, Chau L, Aubry L, Vanti WB, Moreno H, Abeliovich A. [PubMed]

Cell. Glamour two. In this case the retraction notices are all over the place. Once again, the faker is listed second with a co-equal contributor symbol.

Fujita deployed a very impressive number of cheating techniques. This seems slightly unusual... my memory suggests cheaters often focus on one or two strategies*.

Respondent inflated sample numbers and data, fabricated numbers for data sets, manipulated enzyme-linked immunosorbent assay (ELISA) analysis, mislabelled immunofluorescent confocal images, and manipulated and reused Western blot images.

h/t: Comradde PhysioProffe
__
*I could be wrong about this.

20 responses so far

The golden rule of peer review

(by drugmonkey) Apr 04 2015

Review as you would wish to be reviewed.

14 responses so far

Critique writing: Manuscript review

(by drugmonkey) Apr 02 2015

challdreams wrote on rejection.

 These things may or may not be part of your personal life, where rejection rears its head at times and you are left to deal with the fall out. And that type of rejection is seldom based on "your writing" but rather on "you as a person" or "things you did", which is a little more personal and a little harder to 'accept and get back on the horse'.

It made me think of how I try to write criticisms of manuscripts that focus on the document in front of me. The data provided and the interpretations advanced. The hypothesis framed.

I try to write criticisms about where the data as presented do and do not support the claims. The claims as advanced by the authors. This keeps me away, I hope, from saying the authors are wrong, that their experimental skills are deficient or that they are stupid.

It can all be a matter of phrasing, because often the authors hear "you are stupid" when this is not at all what the reviewer thinks she is saying.

22 responses so far

This is who is leading the fight for your future in science

(by drugmonkey) Mar 27 2015

Tweep @MHendr1cks is killing it. The latest.

The PI R01 age distribution looks like the 2010 one from this PPT file.

The "Jedi Council" is, I believe, the ages of the participants in a 2 day workshop convened by Alberts, Kirschner, Tilghman and Varmus as detailed here (see Acknowledgements).

To make this even more interesting, we can look at the 1980-2013 distributional overlay slide.
[Slide: NIH 1980-2013 R01 PI age distributions]

In 1980 the 35-40 year old PI demographic was the immediate pre-Boomer generation but oh, just wait. Stepping forward to 1986 we see...
[Slide: NIH 1986 R01 PI age distribution]

another little bump. 1986 minus 40 equals 1946, the post-WWII definitional start of the Boomers. These slides illustrate why strict generational definitions are only roughly accurate... so no need to get too fussed about those precise age ranges. Suffice it to say, if you were born between about 1940 and 1953 you were in the awesomely lucky zone. Look at how the shoulder in the distribution at age 35 drops off right around 1988-1990 in the slide deck.

61 responses so far

Grants won vs grants awarded on your CV

(by drugmonkey) Mar 26 2015

Sometimes you have to turn down something that you sought competitively.

Undergrad or graduate school admission offers. Job offers. Fellowships.

Occasionally, research support grants.

Do you list these things on your CV? I can see the temptation.

If you view your CV as being about competitive accolades, that is. But we don't do that. In academia your CV is a record of what you have done. Which undergraduate University conferred a degree upon you. Which place granted your doctorate. Who was silly enough to hire you for a real job.

We don't list the undergrad or grad school admission offers we declined, or the places whose job offers we turned down.

So don't list grants you didn't take either.

83 responses so far
