In baseball, fans have a tremendous affection for home run hitters. You may recall the gripping excitement when pharmaceutically enhanced sluggers Sammy Sosa and Mark McGwire battled in 1998 for the single-season home run record. Even those who are not fans of baseball could understand this competition and the feat of hitting a home run.
Esoterica like batting average and on-base percentage are, of course, far more useful to the actual winning of baseball games, but players who excel in these categories receive comparatively little attention. More on this later. The thing about home run hitters is that they strike out. A lot. Take a look at this list of top home run sluggers and this list of strikeout leaders (pitchers apparently left off). Tremendous overlap, right?
Now, as you all know, we are fond of debating career issues in science around these parts. One perennial topic is the severe culling process by which only a fraction of those who acquire a science PhD go on to a long and happy career as a major grant-funded academic scientist at the professorial and/or Principal Investigator level. A minor fraction.
As the NIH-focused scientific community engages (formally and informally) in some introspection about the proper state of the workforce, questions may arise about the quality of various training efforts. As I may have mentioned before, post-graduate training programs are evaluated on a periodic basis by external committees. My program was evaluated when I was a graduate student, and, as a participating faculty member, I've seen the outcome of at least one such review. It strikes me* that some of the outcome measures of interest to such reviews may map onto the baseball analogy, in this case with respect to swinging for the fences.
In this analogy, let attaining a faculty job represent a home run for the training program. No need to limit this to graduate students; we can also use this rubric to assess an institution/department, or an NIH-funded training grant covering the postdoctoral interval. The strikeout? Well, for argument's sake, let's say this is an academic zero for the trainee in question: no authored publications. Not just no first-author papers, but a goose egg on any author credit whatsoever.
Remember, these evaluations of graduate programs (and training grants) can be considered to last the life of the program. No need to focus on just a 3- or 6-year interval; assume these trainees are followed essentially forever.
What would be an ideal balance of outcomes? Suppose a program is a Sammy Sosa / Mark McGwire type of program and places, for argument's sake, 50% of its graduates in faculty positions. Nice high-falutin' faculty positions, mind you, which pretty much assumes these folks are getting first-author credits in very high Impact Factor journals.
However, let us also suppose this comes at a high strikeout cost. Say 25% of the trainees get a few pubs, but only as middle authors, and these people go on to careers not viewed as the Big Leagues: perhaps in biotech, BigPharma, or as career sub-PI academic scientists. And the remaining 25% are blanked. Zero publication credits, not even as middle author.
Contrast this with your favored ratio. Would it be better if 100% of trainees got some presence on an authorship line? If all got a first-author paper, no matter how humble the journal? Is an appreciable strikeout rate, when it comes to trainees publishing scientific work, cause for alarm? Would it be okay if "everybody publishes" came at the cost of significantly reducing the home runs? Suppose all trainees publish, all get first authorships in society-level journals... but only 10% get faculty jobs?
I throw this to you, Dear Reader. What balance would you like to see** for a training program? What ideal tradeoffs would you prescribe (or review favorably)? Does it differ for postdoctoral versus predoctoral training?
Are other measures, more akin to batting average and on-base percentage, more likely to serve the ultimate goal of winning baseball games (some nebulous concept of scientific output that contributes in a real way)? Or is the purpose really just to aggrandize personal stats and other (dare I say) inside-baseball, flashy goals such as Ivy League professorships and multi-R01 funded laboratories?
*Actually, this arose in a conversation with someone who may or may not wish to self-identify in the comments.
**Be sure to mention whether you find this example of a very high home-run rate combined with an appreciable number of strikeouts to be entirely foreign or familiar in your own science experiences.