Fairly frequently in the blogosphere one runs across blog posts that one really likes. Or even that one wishes to have written oneself. And perhaps a post or two that one has been planning to write oneself.
I've recently run across a post on the "three-paper" rule by Massimo at the Exponential Book blog that draws together a series of observations on the purpose of graduate school training. I doubt that I could improve much on this discussion, but I did have one or two small jumping-off points that Massimo overlooked.
Getting back to the main point at hand, another way to put the question in the title is to ask quite simply, "What does it take to earn a Doctorate in Philosophy?"
Most people in a mentoring role understand that it is absolutely essential to communicate to undergraduates who plan post-graduate study the realistic duration of training until the PhD is awarded. In the US, at any rate, the median time to completion for many biomedical-science-related departments is around 6 years. This can come as a bit of a surprise to undergraduates, who more typically anticipate a 4-year duration. As with the working hours, pay and many other facts-o-graduate-school-life, it is in nobody's interest for entering graduate students to think it is a 4-years-and-out deal.
Doing a little research, I find that the median registered time to completion for biomedical science PhDs in the early 1970s was right around 5.5 years. This number rose to a high-water mark of about 6.9 years in the late 1990s. Over the same interval of time, of course, the postdoctoral training interval was expanding dramatically. The number of PhD scientists still in postdoctoral training in a given year following the PhD was rising, as was the number of postdoctoral years prior to landing a faculty appointment.
So who cares about one piddling little year in graduate school, right?
Well, of course it is part of the problem and it needs to be addressed. In fact, the NIH has been putting pressure on graduate departments to rein in their time-to-completion numbers. How, you ask? Via the purse strings, of course. I don't have any official links handy at the moment, but certainly several colleagues are reporting their understanding that if they don't try to reduce the number of 6-year-plus PhD students, training grant funds will become increasingly difficult to procure.
Response has been mixed. I applaud those of my colleagues who agree that it is a good idea to keep graduate training under 6 years. Personally I think it should not be the median but something like the 95th percentile that is under 6 years. I'm willing to accept that 4 years may be too short an interval to shoot for as a graduate program median, so for argument's sake...5 years?
Trying to land on a number, and grappling with the reasons for graduate studies extending past 6 years, brings us right back to the titular question. What is graduate training for? What are we trying to accomplish? When do we decide that a young scientist in training is worthy of being awarded the Ph.D. degree?
The defenders of the ever-expanding time-to-completion are frequently focused on the topic that drew my eye over at the Exponential Book blog.
In many disciplines, the widespread expectation is that by the time doctoral candidates are ready to take the final exam, their curriculum vitae will sport a number of publications, most of which with them as leading authors and/or in high-impact journals in their field of study.
I should point out that I trained in an environment/time when what I call the monolithic thesis approach still more or less ruled the day. It was giving way, however, to what we termed the "staple dissertation". This latter was a dawning situation whereby, if a graduate student had managed to publish about three or more papers during graduate study, she would more or less add some limited introductory or transitional remarks in between the papers, staple the whole thing together and submit it as the dissertation. The focus, clearly, was on the publication of papers over the generation of a monolithic tome of meticulous review, experimental methods and results, and a whole lot more theorizin' about what it all meant.
Personally I think there is still room for both approaches or a hybrid of the two in forming a plan of attack on the dissertation and thesis defense. Different strokes and all that (RIP Gary Coleman). There is a tendency for either of these approaches, however, to be gated on experimental success.
This is where things get tricky.
Let us acknowledge right off the bat that it is easy to fail, as a scientist, due to incompetence, inability, lack of aptitude and/or effort. Sure. And in some senses we do not want to award the PhD to an individual who fails because of these traits. But as Massimo observed:
Lest we forget: this is research, which means, sometimes things simply do not work out. Experiments do not yield useful data, maybe they were not well thought out to begin with, theoretical approaches fail to yield any novel insight. I maintain, however, that there is a chance for students to grow into respectable science professionals, even working on doomed projects.
Exactly! And remember that graduate training involves (or should) supervision by a committee of faculty in addition to the primary mentor(s). They should easily be able to distinguish research which is not working out from a lack of effort. The committee should also be able to determine when the PI is making decisions about publication of data that are detrimental to the legitimate interests of the doctoral student, but that may be a big topic for another day.
And as Massimo also mentioned, the mere fact that a student has been fortunate enough to get "good" results and publish a few papers is no guarantee that she has received good training in how to be a scientist. Personally, I think having it too easy isn't all that great a preparation for the life of a Principal Investigator either. Again, the committee should be in a position to realize that student Warshnozzle slotted into a productive vein in the lab but still has no intellectual engagement in the field to speak of, can't troubleshoot worth a lick and has learned only a limited number of technical skills.
Really, go read the original post, it is full of goodness.
The thing he missed, however, is the way that the expectation for publications, for first authorships and for "success" generally may influence scientific fraud and data faking. Again, I agree with Massimo that the point of graduate student training is not to generate experimental successes. It is more to learn how to be a scientist and in my view this does not require that one demonstrate wild success (in terms of publishable, hypothesis-testing, P less than oh-point-oh-five paper figures).
I recognize that stuff can happen. Yes, even in the course of a three-to-four year stint of training (after rotations). Sometimes a perfectly good scientist comes up blank. We know this because of our peers who have struggled mightily in one setting and have later gone on to regain their footing in another. I know of several such scientists personally. It is not just that I happen to like them and make excuses for their lack of success/productivity...it is that the record reflects that they are productive, roughly up to the standards of their subfields, in a different setting*.
And yet the meme is that "past productivity predicts future productivity". Experimental failure is personal; it is your failure as a scientist. A publication gap means that you, well, perish.
The more we insist on this within the Tribe of Science, the more we raise the stakes of poor experimental outcomes. I would argue that the less tolerant we are of failure, the more we encourage the faking of success.
*Of course, many unsuccessful individuals never get the chance to demonstrate their talents because an earlier lack of success pushes them off the career path in some way. I don't mean to imply that it is only the ones who do return to the track who are deserving.