We always seem to go through a run of manuscript rejections and then a run of accepts. I wish we could figure out how to spread these out a little bit more. Would alternation be too much trouble, world?
For whatever reasons I was thinking of at the time, I was motivated to twttr this:
Periodic reminder to publish. If you don't have manuscripts under first review by April... 2016 pub year is already slipping away from you.
— Drug Monkey (@drugmonkeyblog) February 2, 2016
What I mean by this is that somewhere on the pile of motivations you have to finish that manuscript off and get it submitted, you should have "keeping your calendar years populated".
It may not always be a big deal, and it certainly pales in comparison to JIF considerations, but all else equal you want to maintain a consistent rate of output of papers. Eventually someone will look at that in either an approving or disapproving way, depending on how your publication record looks to them.
Like it or not, one way that people will consider your consistency over the long haul is by looking at how many papers you have published in each calendar year. Published. Meaning assigned to a print issue that is dated in a particular calendar year. You cannot go back and fill these in when you notice you have let a gap develop.
If you can avoid gaps*, do so. This means that you have to have a little bit of knowledge about the typical timeline from submission of your manuscript for the first time through until the hard publication date is determined. This will vary tremendously from journal to journal and from case to case because you don't know specifically how many times you are going to have to revise and resubmit.
But you should develop some rough notion of the timeline for your typical journals. Some have long pre-print queues. Some have short ones. Some move rapidly from acceptance to print issue. Some take 18 mo or more. Some journals have priority systems for their pre-print queue and some just go in strict chronological order.
And in this context, you need to realize something very simple and clear. Published is published.
— Chris Cole (@drchriscole) February 2, 2016
Yes, mmhmm, very nice. Pre-print archives are going to save us all. Well, this nonsense does nothing for the retrospective review of your CV for publication consistency. At present, the culture of scientific career evaluation in the biomedical disciplines does not pay attention to pre-print archives. It doesn't even really respect the date of first appearance online in a pre-publication journal queue. If your work goes up in 2016 but doesn't make it into a print article until 2017, history will cite it as 2017.
*Obviously it happens sometimes. We can't always dictate the pace of everything in terms of results, funding, trainee life-cycles, personal circumstances and whatnot. I'm just saying you should try to keep as consistent as possible. Keep the gaps as short as possible and try to look like you are compensating. An unusually high number of pubs following a gap year goes a long way, for example.
An email from the CSR of the NIH hit a few days ago, pointing to a number of their Peer Review Notes, including one on the budget bump that we are about to enjoy.
Actually that should be "some selected few of us will enjoy" because
“While $2 billion is a big increase, it is less than a 10 percent increase, and a large portion of it is set aside for specific areas and initiatives,” said Dr. Nakamura. “Competition for funding is still going to be intense, and paylines will not return to historic averages . . .
Yeah, as suspected, that money is already accounted for.
The part that has me fired up is the continuation after that ellipsis, and the item under the following header.
So make sure you put your best effort into your application before you apply.”
“We know some research deans have quotas and force their PIs to submit applications regularly,” said Dr. Nakamura. “It’s important for them to know that university submission rates are not correlated with grant funding. Therefore, PIs should be encouraged to develop and submit applications as their research and ideas justify the effort to write them and have other scientists review them.”
As usual, I do not know if this is coming from ignorance or from a calculated strategy to make their numbers look better. I fear both possibilities. I'm going from memory here, because I can't seem to rapidly find the related blog post or data analysis, but I think I recall an illustration that University-total grant submission rates did not predict University-total success rates.
At a very basic level, Nakamura is using the lie of the truncated distribution. If you don't submit any grant applications, your success rate is going to be zero. I'm sure he's excluding those cases, because including them would make for a nice correlation.
But more importantly, he is trying to use university-wide measures to convince the individual PI what is best for her to do.
Wrong. Wrong. Wrong.
Not everyone's chances at that institution are the same. The more established investigators will probably, on average, enjoy a higher success rate and can therefore submit fewer applications. Lesser folk enjoy lower success rates, and therefore have to keep pounding out the apps to get their grants.
By extension, it takes very little imagination to understand that depending on your ratio of big important established scientists to noobs, and based somewhat on subfields, the apparent University-wide numbers are going to swamp out the information that is needed for each individual PI.
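This aggregation problem is easy to demonstrate with a toy simulation. The sketch below is purely hypothetical — the hit rates, submission counts and faculty mixes are all invented for illustration, not real NIH numbers. Each simulated university mixes established PIs, who hit at a higher per-application rate and submit less, with newer PIs, who hit at a lower rate and compensate by submitting more:

```python
import random

random.seed(1)

# Invented per-application funding rates for two hypothetical PI types.
HIT_RATE = {"established": 0.25, "new": 0.10}
APPS_PER_YEAR = {"established": 3, "new": 8}  # newer PIs pound out more apps

def university(frac_established, n_pis=100):
    """Simulate one university; return (apps per PI, overall success rate)."""
    apps = awards = 0
    for _ in range(n_pis):
        kind = "established" if random.random() < frac_established else "new"
        n_apps = APPS_PER_YEAR[kind]
        apps += n_apps
        # Each application independently succeeds at that PI type's hit rate.
        awards += sum(random.random() < HIT_RATE[kind] for _ in range(n_apps))
    return apps / n_pis, awards / apps

for frac in (0.8, 0.5, 0.2):
    per_pi, rate = university(frac)
    print(f"{frac:.0%} established: {per_pi:.1f} apps/PI, success rate {rate:.2f}")
```

The universities that submit the most applications per PI show the lowest success rates, simply because they have more junior faculty. The aggregate numbers say nothing about whether any individual PI should submit more or fewer applications.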
In short, this is just another version of the advice to young faculty to "write better grants, just like the greybeards do".
The trick is, the greybeards DO NOT WRITE BETTER GRANTS! I mean sure, yes, there is a small experience factor there. But the major driver is not the objective quality but rather the established track record of the big-deal scientist. This gives them little benefits of the doubt all over the place as we have discussed on this blog endlessly.
I believe I have yet to hear from a newcomer to NIH grant review who has not, within 1-2 rounds, had the experience of a reviewer ending his/her review of a clearly lower-quality grant proposal with "....but it's Dr. BigShot and we know she does great work and can pull this off." Or similar.
I have been on a study section round or two in my day and I am here to tell you. My experience is not at all consistent with the idea that the "best" grants win out. Merit scores are not a perfect function of objective grant quality at all. Imperfectly prepared or boring grants get funded all the time. Really exciting and nearly-perfect grants get unfundable scores or triaged. Frequently.
This is because grant review hinges on the excitement of the assigned reviewers for the essence of the project. All else is detail.
You cannot beat this system by writing a "perfect" grant. Because it may not be perfect for all three reviewers no matter how well it has been prepared and how well vetted by whatever colleagues you have rounded up to advise you.
Nakamura should know this. He probably does. Which makes his "advice" a cynical ploy to decrease submissions so that his success rate will look better.
One caveat: I could simply be out of touch with all of these alleged Dean-motivated crap apps. It is true that I have occasionally seen people throw up grant applications that really aren't very credible from my perspective. They are very rare. And it has occasionally been the case that at least one other reviewer liked something about an application I thought was embarrassingly crappy. So go figure.
I also understand that there are indeed Deans or Chairs who encourage high submission rates, and maybe this leads to PIs writing garbage now and again. But this does not account for the dismal success rates we are enjoying. I bet that magically disappearing all apps that a PI submitted to meet institutional vigor requirements (but didn't really mean as a serious play for an award) would have no perceptible effect on success rates for the rest of us. I just haven't ever seen enough non-credible apps for this to make a difference. Perhaps you have another experience on study section, Dear Readers?
Finally, I really hate this blame-the-victim attitude on the part of the CSR and indeed many POs. There are readily apparent and demonstrable problems with how some categories of PIs' grants are reviewed. Newer and less experienced applicants. African-American PIs. Women. Perhaps, although this is less well-explicated lately, those from the wrong Universities.
For the NIH to avoid fixing their own problems with review (for example the vicious cycle of study sections punishing ESI apps with ever-worsening scores when the NIH used special paylines to boost success rates) and then blame victims of these problems by suggesting they must be writing bad grants takes chutzpah. But it is wrong. And demoralizing to so many who are taking it on the chin in the grant review game.
And it makes the problems worse. How so? Well, as you know, Dear Reader, I am firmly convinced that the only way to succeed in the long term is to keep rolling the reviewer dice, hoping to get three individuals who really get what you are proposing. And to take advantage of the various little features of the system that reward frequent submissions (reviewer sympathy, PO interest, extra end-of-year money, ARRA, sudden IC initiatives/directions, etc.). Always, always you have to send in credible proposals. But perfect vs. really good guarantees you nothing. And when perfect keeps you from submitting another really good grant? You are not helping your chances. So when Nakamura tells people to sacrifice the really good for the perfect, he is worsening their chances. Particularly when those people are in groups that are already at a disadvantage and need to work even harder* to make up for it.
*Remember, Ginther showed that African-American PIs had to submit more revisions to get funded.
Should people without skin in the game be allowed to review major research grants?
I mean those who are insulated from the results of the process. HHMI stalwarts, NIH intramural, national labs, company scientists...
On one hand, I see argument that they provide needed outside opinions. To keep an insular, self-congratulating process honest.
On the other, one might observe that those who cannot be punished for bad behavior have license to be biased, jerky and driven by personal agenda.
Would you prefer review by those who are subject to the funding system? Or doesn't it matter?
Yes, I realize that science these days is super collaborative and needs expensive tools, models and techniques to be cool.
Strategically as a lab, you need to have a bread and butter data stream that you produce in house. Data that you generate, interpret, understand and publish without the input of any other lab groups. Data that is, in and of itself, capable of generating publications that meet at least the lower bound expectations of your department, subfield and whomever else is evaluating you.
This may not be the same thing over the long haul, either. Interests change. But the thing that never changes is that nobody is going to find your publication goals, demands or needs as critical as you do. And in this game, not publishing is simply not an option.
So figure out your data stream and protect it.
January is a great time to look at yourself in the mirror and ask what your plan is for improving your record of publication.
What are your usual hurdles that get in the way? What are the current hurdles?
What works to get you moving?
My biggest problem is me.
We're at the point in my lab where available data are not really the issue; we have many dishes cooking along in parallel at most times. Something is always ready, or close to being ready, to serve up.
The problem is almost always the wandering of my attention and my energy to kick something over the final step to submission.
The game I have taken to playing with myself is to see how long I can go with at least one manuscript under review. I made it something like 14 mo a few years ago. Of course I then promptly fell into another extended dry spell but....
The other game I play with myself is to see how many manuscripts we can have under review simultaneously. That is, of course, much more subject to the ebb and flow of project maturation and the review process. But if we happen to have a few stacking up, sure I'll use the extra motivation to keep my attention pegged to finishing a draft.
When all else fails there is always "We need this published in order to help get this next grant funded, aiieeeee!"
Scientifically, that is.
it's hard when sr. PIs ask you where you hope your lab is in 5 yrs because the honest answer is: I hope I'm still here (1)
— Zoe McElligott (@nanopharmNC) January 12, 2016
I like the answer Zoe gave for her own question.
I, too, just hope to be viable as a grant funded research laboratory. I have my desires but my confidence in realizing my goals is sharply limited by the fact I cannot count on funding.
Edited to add:
When I was a brand new Assistant Professor I once attended a talk by a senior scientist in my field. It wasn't an Emeritus wrap-up, but it was certainly later career. The sort of thing where you expect a broad, sweeping presentation of decades of work organized around a fairly cohesive theme.
The talk was "here's the latest cool finding from our lab". I was.....appalled. I looked over this scientist's publication record and grant funding history and saw that it was....scattered. I don't want to say it was all over the place; there were certain thematic elements that persisted. But this was when I was still dreaming of a Grande Arc for my laboratory. The presentation was distinctly not that.
And I thought "I will be so disappointed in myself if I reach that stage of my career and can only give that talk".
I am here to tell you people, I am definitely headed in that direction at the moment. I think I can probably tell a slightly more cohesive story but it isn't far away.
I AM disappointed. In myself.
And of course in the system, to the extent that I think it has failed to support my "continuous Grande Arc Eleventy" plans for my research career.
But this is STUPID. There is no justifiable reason for me to think that the Grande Arc is any better than just doing a good job with each project, 5 years of funding at a time.
I don't care what stage of the doctoral arc you inhabit, having science minions helps move your science forward.
The more minions, the better, assuming you have the resources available to fill their time productively.
If you don't want moar data than you can generate with your own two hands, this is not the right career for you.
Found this on the Facebooks. It seems appropriate for a science-careers audience:
There was a farmer who grew excellent quality corn. Every year he won the award for the best grown corn. One year a newspaper reporter interviewed him and learned something interesting about how he grew it. The reporter discovered that the farmer shared his seed corn with his neighbors. “How can you afford to share your best seed corn with your neighbors when they are entering corn in competition with yours each year?” the reporter asked.
“Why sir,” said the farmer, “Didn’t you know? The wind picks up pollen from the ripening corn and swirls it from field to field. If my neighbors grow inferior corn, cross-pollination will steadily degrade the quality of my corn. If I am to grow good corn, I must help my neighbors grow good corn.”
— Drug Monkey (@drugmonkeyblog) December 31, 2015
#LabGoals2016 Solve nagging "that shouldn't do that" science question we've been struggling with for two years.
— Drug Monkey (@drugmonkeyblog) December 31, 2015
Sometweep or other mentioned career goals for the year. I don't really set lab goals at all....too busy just keeping on with whatever is in front of me? Maybe this is a bad idea?
I came up with the above as an off the cuff response.
Anyway, now I am curious if you set goals for yourself in the academic career and science profession space?