Swinging for the fences

In baseball, the fans have a tremendous affection for home run hitters. You may recall the gripping excitement when pharmaceutically enhanced sluggers Sammy Sosa and Mark McGwire battled in 1998 for the single-season home run record. Even those who are not fans of baseball could understand this competition and the feat of hitting a home run.

Esoterica like batting average and on-base percentage are, of course, far more useful to the actual winning of baseball games, but people who excel in these categories receive comparatively less attention. More on this later. The thing about home run hitters is that they strike out. A lot. Take a look at this list of top home run sluggers and this list of strikeout leaders (pitchers apparently left off). Tremendous overlap, right?

Now, as you all know, we are fond of debating career issues in science around these parts. One perennial topic is the severe culling process by which only a fraction of those who acquire a science PhD go on to a long and happy career as a major grant-funded academic scientist at the professorial and/or Principal Investigator level. A minor fraction.

As the NIH-focused scientific community engages (formally and informally) in some introspection about the proper state of the workforce, some questions may arise about the quality of various training efforts. As I may have mentioned before, post-graduate training programs get evaluated on a periodic basis by external committees. My program was evaluated when I was a graduate student, and I've seen the outcome of at least one such review of a program as a participating faculty member. It strikes me* that some of the outcome measures of interest to such reviews may map onto the baseball analogy, in this case with respect to swinging for the fences.

In this analogy, let attaining the faculty job represent a home run for the training program. No need to limit this to graduate students; we can also use this rubric for assessing an institution/department, or an NIH-funded postdoctoral training grant, for the postdoctoral training interval as well. The strikeout? Well, for argument's sake, let's say that this is an academic zero for the trainee in question, which means no authored publications. Not just no first-author papers, a goose egg on any author credit.

Remember, these evaluations of graduate programs (and training grants) can be considered to last the life of the program. No need to focus on just a 3- or 6-year interval...assume these trainees are followed essentially forever.

What would be an ideal balance of outcomes? Suppose a program is a Sammy Sosa / Mark McGwire type of program and places, for argument's sake, 50% of its graduates in faculty positions. Nice high-falutin' faculty positions, mind you. Which pretty much assumes these folks are getting first-author credits on publications in very high Impact Factor journals.

However, let us also suppose this comes at a high strikeout cost. Say 25% of the trainees get a few pubs, but they are middle authors, and these people just sort of go on to have careers not viewed as the Big Leagues: perhaps in biotech, BigPharma, or as career academic sub-PI scientists. And the remaining 25% are blanked. Zero publication credits, not even as middle author.

Contrast this with your favored ratio. Would it be better if 100% of trainees get some presence on an authorship line? If all get a first-author paper, no matter how humble the journal? Is an appreciable strikeout rate, when it comes to trainees publishing scientific work, cause for alarm? Would it be okay if "everybody publishes" comes at the cost of reducing the home runs significantly? Suppose all trainees publish, all get first authorships in society-level journals...but only 10% get faculty jobs?

I throw this to you, Dear Reader. What balance would you like to see** for a training program? What ideal tradeoffs would you prescribe (or review favorably)? Does it differ for postdoctoral versus predoctoral training?

Are other measures, more akin to batting average and on-base percentage, more likely to benefit the ultimate goal of winning baseball games (some nebulous concept of scientific output that contributes in a real way)? Or is the purpose really just to aggrandize personal stats and other (dare I say) inside-baseball, flashy goals such as Ivy League professorships and multi-R01-funded laboratories?

__
*actually this was in a conversation with someone who may or may not wish to self-identify in the comments

**be sure to mention if you find the example of very high home-run rate combined with appreciable numbers of strikeouts in this context to be entirely foreign or familiar in your science experiences.


  • D. C. Sessions says:

    If [1] your single goal is to produce home runs, your best strategy is to throw all possible resources towards home-run hitters and screw the rest. As in, divert resources (time, lab space, mentoring, funds) from anything they're doing for their own work and dedicate them to offloading nonproductive tasks from the Chosen Ones.

    It's not like that isn't a common workplace strategy already, and you might as well get them used to reality.

    [1] That's the question now, isn't it?

  • Namnezia says:

    I think your analogy is not quite right, because it implies that in labs where lots of high-impact papers are being published, there's no room for publications in more run-of-the-mill journals. It's not like if your paper is not accepted in C/S/N then it won't be published. In most cases, it will eventually be published somewhere else, so in these labs almost nobody strikes out. Sure, some people will fall through the cracks in a big lab, but this is different than swinging for the fences and striking out. So having everyone publish does not come at the cost of having the lab produce some fancy pubs. I would like to see a high percentage of people get faculty positions and everyone publishing at least one first author paper during their stay in the lab. And I don't think this is unrealistic, especially in a fancy lab.

  • drugmonkey says:

    It's not like if your paper is not accepted in C/S/N then it won't be published. In most cases, it will eventually be published somewhere else, so in these labs almost nobody strikes out.

    There are labs* in which the BigCheez will not pollute his/her good name with lesser journals and/or stories. Glamour or nothin'. In such labs, it is entirely possible for a postdoc to labor away for 5 years and come up blank as far as first-author pubs go. No pubs at all**? Surely this must exist as well....

    *sigh, yes, I know of them personally. Not just rumor.

    **nebulous rumor of the day, not first hand info

  • drugmonkey says:

    everyone publishing at least one first author paper during their stay in the lab.

    For anything more than summer internships and rotation students, I agree with you. If this does not obtain, something has gone seriously wrong and the finger points first* at the PI.

    *not exclusively, just first.

  • Namnezia says:

    There are labs* in which the BigCheez will not pollute his/her good name with lesser journals and/or stories. Glamour or nothin'. In such labs, it is entirely possible for a postdoc to labor away for 5 years and come up blank as far as first-author pubs go. No pubs at all**? Surely this must exist as well....

    While such labs may exist (I can think of one off the top of my head), in my experience it's not the norm, even for HoityToity labs. Usually what I see is that people will go off in a dead end project, not seek advice, and basically go unnoticed for five years and have nothing to show for it.

  • Anon2 says:

    I thought you were going to go a different direction with this analogy. "Home runs" being splashy CSN type science in huge program project grants vs a good batting average and strong on-base record being the more bread and butter type science. Do all new faculty have to swing for the fences, hoping to score some home runs in order to get tenure, or can a more consistent but less spectacular approach of getting solid publications, reasonable funding, etc. be enough? This has been on my mind lately because I'll be going up for tenure next year. I don't have a flashy publication record or a ton of funding. But I do have good publications, reasonable funding and a good track record of placing former students and postdocs in good positions. Fortunately my institution seems to value these contributions. But I do worry that too much money is thrown at those who hit a few home runs at the expense of the solid team players.

    To answer your question, the analogy is pretty far from my experience as a grad student at a top university. But there were others in my department (in different labs) that ended up as the zero and it wasn't pretty.

  • anon says:

    "...these people just sort of go on to have careers not viewed as the Big Leagues- perhaps in biotech, BigPharma, career academic sub-PI scientists."

    This is a horrible analogy and an elitist attitude. I've known excellent scientists (those who published multiple times in Glamour mags in both post-doc and grad student capacities) who have gone on to have spectacular careers in industry. I can't believe that a graduate program would rank itself according to how many of their trainees obtain faculty positions - for which there is now not enough support. A better metric might be to quantify those who have successfully started their own companies, or who are in scientific leadership positions of any kind (whether in biotech, as journal editors, etc). It seems narrow-minded to limit the idea of success, defined here as a "home run," to obtaining a tenure track faculty position. fuck that.

  • gerty-z says:

    just because it is narrow minded doesn't mean that isn't how it is done. It is pretty difficult to define when someone is in a "scientific leadership position". But, it is easy to tell if they are the PI of a lab. It may be elitist, etc. but as a metric it is pretty straightforward.

    As for the question at hand, I agree with Nam. I think that in most labs you don't necessarily strike out if you don't hit a home run.

  • drugmonkey says:

    The main principle of the discussion is not really affected by what you put in the Major League versus non-Major category, anon. Categorize as you like and then address the points at hand, I think you'll find the specifics are unimportant.

  • Dr Becca says:

    Are we talking about individual labs or training programs, here? Because I think the analogy works for the former, but not the latter. Yes, there are definitely PIs who insist their trainees work without publishing until they have enough data for a CNS paper, risking scoopage, burnout, etc. You come out either a rock star or a basket case.

    But the training programs themselves don't determine where the students submit their papers. The training program's objective is to recruit the most talented and (in theory) likely-to-succeed doctoral candidates they can, and for the top programs, I would argue that their candidate pool is NOT the most likely to "strike out" the way home-run hitters are.

  • CPP says:

    pitchers apparently left off

    Dude, starting pitchers don't accumulate enough strikeouts in a career to come close to even the bottom of that list. For example, Tom Seaver pitched 18 years in the National League, and had 485 career strikeouts. Steve Carlton played 23 years in the National League, and had 413 career strikeouts. 500 career strikeouts doesn't even get you in the top 1000 all time.

    Jeezus fucke, holmes, how fucken stupid are you?

  • drugmonkey says:

    Training programs have a responsibility to function well*. That is why they are periodically reviewed, as I mentioned in the post...

    *what "well" means is the discussion for today.

  • proflikesubstance says:

    The situation you mention is largely foreign to me. It is extremely rare IME for someone to graduate from a lab without at least one first author pub in my field. Maybe it's not a glamor pub, but it represents that student's work.

  • DJMH says:

    Some labs certainly have a NOGTFO (Nature or GTFO) attitude.....they are not usually friendly places for grad students. But simply having one or more of those labs in a department isn't necessarily evidence that the grad program itself is bad for students. The question to ask 6th year students without papers is whether anyone in the program ever discouraged them from joining the NOGTFO. Some students hear this advice plenty but don't take it, and it would be silly to judge a grad program on its more idiotic members.

  • BugDoc says:

    The purpose of the training program is to impart scientific thinking and experience to students, so first author publications are a key measure of whether the training program is fulfilling its mission. Whether those papers are in GlamorMagz or not is capricious and depends on individual labs - that is not a reliable indicator of the quality of training. As far as where the trainees go, study sections should be discouraged from using the % of tenure track faculty positions as a metric for success of the program: (1) there aren't that many TT positions out there, (2) students make career choices that may be completely independent of their accomplishments and the quality of the program, and (3) it is critical to acknowledge that there are many "successful" paths in science, including industry, science journalism, teaching, etc. It's neither realistic nor desirable for all of our trainees to become academic scientists. Continued progress of trainees in any science-related career would be a better metric.

  • drugmonkey says:

    So would you hammer a GoBig or GoHome program if you were reviewing it, BugDoc?

  • anon says:

    DM - I think requiring a first-authored publication of a PhD student as part of their requirement is not the same as "go big or go home". That's my take on what BugDoc is saying. I also agree with the statement: "Continued progress of trainees in any science-related career would be a better metric." - it was the point I was trying to make earlier.

    I must admit I get a little boiled up about emphasizing the academic career track as a measure of success. Going by NIH grant numbers - since only about 1/3 of R01 or equivalent grantees are female - women enter this career track at a disproportionately low rate. The rest are assumed to pursue different careers. I know at least one who chose to be a stay-at-home Mom for a couple of years before she got a job at a company. Does this mean that she's a failure? Does this mean that the graduate program where she got her Ph.D. is not up-to-par because she, and likely many other women, chose this route? Does this also mean that graduate programs that are predominantly male (if there are any) should be favored, because, well, men just happen to make up 2/3 of the academic tenure track? I'm just sayin.

  • drugmonkey says:

    If I saw a program where women were disproportionately choosing alternate paths, that would be a matter of concern, yes. Relative to other similar programs, of course.

  • BugDoc says:

    "So would you hammer a GoBig or GoHome program if you were reviewing it, BugDoc?"

    Hammer it? No. Consider it a critical concern that needs to be addressed? Yep. Obviously depends on other attributes of the program. I would not criticize the Go Big part per se, but point out that the GTFO part of the cohort was being poorly served by the training program. Up to them how to address that issue.

  • DrugMonkey says:

    If I saw evidence a training program, grad or postdoc, was throwing 25-50% of trainees under the bus in pursuit of GlamourPubChasing I'd be coming down like thunder and the hand of God herself on that program.

  • BugDoc says:

    What would you consider "evidence"? You've only got the numbers. There could be many reasons why 25-50% of trainees were not doing well (i.e., no publications) in a given training program. It's a problem no matter what, but depending on the nature of the problem, the solutions might be different and have nothing to do with GlamourPubs. That's something the program has to figure out.

  • Alex says:

    I wonder how many people would admit that the reason their students aren't publishing is that anything less than Nature isn't good enough. You're visiting a department. A student says that the professor is being mean and wouldn't resubmit a paper to a society-level journal after being rejected from Science/Nature/Nature progeny/PNAS/Cell. The professor says that the reviewers identified some significant issues that need to be addressed regardless of where the work is ultimately resubmitted. The professor is not impressed with the student's progress on addressing these issues. The student says something else. You might be in the same general area, but you aren't in the specific sub-sub-field of that lab. You also have a couple of days and 2 dozen other labs to visit during this department review. What do you say?

  • CPP says:

    I must admit I get a little boiled up about emphasizing the academic career track as a measure of success.

    NIH mandates that this is the measure of success of a training program.

  • anon says:

    CPP - where does the NIH document this? It seems intuitive to think that a measure of success is where trainees reach their expected goals. Expected goals may vary according to individual and should include a range of jobs in either the industry or academic sectors. Employment in a relevant scientific endeavor being the key. If the measure of success is limited to a faculty position in academia, we indeed have a sad and bleak situation - one in which only a few will "make it," and one that looks to be heavily biased against women (only 30% are expected to reach this point, even lower at higher ranks).

  • drugmonkey says:

    There could be many reasons why 25-50% trainees were not doing well (i.e., no publications) in a given training program. It's a problem no matter what, but depending on the nature of the problem, the solutions might be different and have nothing to do with GlamourPubs.

    My imagination fails me then. I know what it takes to compete continually at the Glamour level. It requires a lot of person hours that would otherwise be publication credit, even first author credit, being subsumed below the waterline of publishable data. Not even in the copious Supplementary Materials part. So if I saw a program with a lot of trainees getting CNS papers and a lot more getting blanked, sorry but that is not "other factors". That is GlamourCulture at work.

  • drugmonkey says:

    Seems odd that the T32 mech is left off this page http://cms.csr.nih.gov/peerreviewmeetings/reviewerguidelines/ no? or am I just not seeing it?

  • BugDoc says:

    "NIH mandates that this is the measure of success of a training program."

    That is currently the case for many training programs, although I have heard discussions suggesting that criteria may change to reflect the fact that the academic job market is very limited right now. I suspect the reason that the T32 mechanism may not be on the info page, DM, is that training programs can have very different goals and outcomes. For example, training programs that involve more practical applications like bioengineering or pharmacology may expect to have a much higher percentage of people going into industry, which is accepted as a successful outcome in such programs.

  • BugDoc says:

    "If I saw evidence a training program, grad or postdoc, was throwing 25-50% of trainees under the bus in pursuit of GlamourPubChasing I'd be coming down like thunder and the hand of God herself on that program."

    By the way, how exactly would you be coming down like thunder, practically speaking? If you hosed the program on the first submission for that reason, and they responded in the resubmission by saying "from now on, we will change our GlamourCulture, sir, yes sir!", would you give them a favorable score based on that? It's not like they could show evidence of changing GlamourCulture overnight. So you either take their word for it (not Hand-o'God quality smackdown), or you justify cutting a program that produces a goodly number of superstars/GlamourPubs. Hmmm....I'd love to be a fly on the wall at that study section.

  • anon says:

    DM - T32 review criteria can be found here:
    http://grants.nih.gov/grants/peer/critiques/t32.htm
    If you scroll down to where it says "Training record", there is mention of a measure of success. It says "How successful are the trainees in achieving productive scientific careers, as evidenced by successful competition for research grants, receipt of honors or awards, high-impact publications, receipt of patents, promotion to scientific leadership positions, and/or other such measures of success?"
    Although some aspects of this statement may imply an academic career, there is nothing that says that success MUST be defined by whether the trainee continues on in academia. However, they do mention specifically (your favorite) "high impact publications".

  • drugmonkey says:

    However, they do mention specifically (your favorite) "high impact publications".

    That turns up in many places in NIH review advice. If you have been around the block a few times you'll find that this means basically anything peer reviewed, with a pulse and preferably above dump journal status now and again. Eye of beholder kind of thing. I've seen the same set of what I call society level journals referred to as "high impact" in one context and busted on for being too pedestrian in other contexts.

    I should mention too, that I have definitely seen evidence of, for example, active BigPharma and/or biotech career engagement viewed as good outcome. And depending on the timeline and company specifics their pubs might vanish at that point. But it is the job of the training environment or grant to show how awesome the alternatives are in addition to the traditional academic trainees. We're predicating our discussion on the lack of even that evidence in a big chunk of the trainees.... stipulate, if you will, that my goose egg is real, not a snipe at non-traditional-academic career pathways.

  • Dave says:

    What's particularly annoying is that program quality is often judged better as the number of NIH-funded PIs in the program increases. In other words: programs with the most funding should have the most funding. WTF? How does this even make sense? It's like giving drugs only to the patients whose health seems to be improving, because they are the ones who will most likely end up healthy.

    Every training grant proposal should be required to state up front what its goals are -- same as any other project proposal. If the goal is to produce future PIs / NIH applicants, so be it. If the goal is to produce community college instructors, fine. If the goal is to produce a skilled workforce for the pharmaceutical industry, super. Different institutions and programs have different sorts of trainees, and strengths. Not everyone needs to be (or should) be doing the same thing. In fact, it could be argued that the LAST thing NIH should be doing right now is spending money to produce future NIH applicants. Which means that the way NIH currently evaluates training programs is way off target. (or 'off base', to further smangle DM's baseball analogy)

    Dont'cha think?

  • drugmonkey says:

    How does this even make sense? It's like giving drugs only to the patients whose health seems to be improving, because they are the ones who will most likely end up healthy.

    This is called, wait for it, triage and is in fact practiced in medicine. Particularly of the emergency variety of medical care.

    In the current climate of NIH budgetary stagnation, should we not focus on the healthiest labs and let the less promising ones die off?

  • anon says:

    DM - you've got that backwards. Patients who appear healthy enough are the ones who are triaged, and the ones who come in looking the worst off get the attention. Yes, a healthy and productive lab deserves to keep going. But the determining factor for "less promising" labs should not necessarily be that they lack funding. Why support a good lab at the expense of another that could potentially produce results that are just as good, if not better?

  • CPP says:

    DM - you've got that backwards.

    No, you apparently have no idea how emergency triage works. Triage separates cases into at least three categories: hopeless, needs treatment now, can wait for treatment. Limited resources are not wasted on hopeless cases, and are immediately devoted to the needs treatment now category.

  • drugmonkey says:

    Patients who appear healthy enough are the ones who are triaged, and the ones who come in looking the worst off get the attention.

    um, say what?

    Are you suggesting that productive, one to three grant labs that look "stable" are getting preferentially punished nowadays? That the NIH is extending a lot of R56 Bridge and exception-funding life boats to terribly undeserving, underproducing laboratories?

  • anon says:

    "Are you suggesting that productive, one to three grant labs that look "stable" are getting preferentially punished now a days? That the NIH is extending a lot of R56 Bridge and exception-funding life boats to terribly undeserving, underproducing laboratories?"

    No and no. First, CPP: yikes. I would assume hopeless means already dead. What else would qualify as hopeless? I was referring to the distinction between "needs treatment now" and "can wait for treatment". DM, I had in mind new labs that have not yet been able to get funding and wasn't thinking of the R56. Are you suggesting then that these labs be eliminated to keep the 3+ grant supported labs going at their current funding level?

  • drugmonkey says:

    Brand new labs are not "hopeless" cases that need to be triaged out as a class, no.

  • CPP says:

    First, CPP: yikes. I would assume hopeless means already dead. What else would qualify as hopeless?

    Jeezus fucke, holmes. Have you tried googling motherfucken "triage"? The shitte is not that fucken complicated.
