I have recently noticed some fatalism among our junior colleagues--post-docs and recently independent PIs--concerning their prospects of completing "interesting" projects and getting them published in top journals, either field-specific or C/N/S-level. For example, Sciencewoman recently posted about her feelings of inadequacy, triggered by a more junior colleague's recent publication in a C/N/S-level journal:
Why is it that the other guy is getting a very high profile paper and I'm struggling to get results that will merit publication at all?
And her first answer (among others) was as follows:
He's luckier than me. He got a project that worked.
The take-home message of this DrugMonkey post is that "luck"--whatever the fuck that word even means--is only one factor among many. And the other factors are much more within the control of the scientist. To see what these factors are, and how to take control of them, jump below the fold. (Also below the fold is an update that addresses management of multiple projects to diversify risk/reward.)
As an entry point for further discussion, here are the rest of the potential answers Sciencewoman adduces for her question:
He dared to submit to C/N/S.
He's got more resources to throw at the project than I do.
He worked harder than me. Hard work begets rewards. (Sure would be nice if that were dependable.)
His project has simply had more time devoted to it. Science like wine gets better with time. (Or not. My project seems to diminish with age.)
His co-authors are bigger names than mine. They've published in C/N/S before.
The reason he has big-name co-authors is that he had and used better connections than I did when we were at the grad school hunting stage of the game.
He hasn't made choices that balanced his professional aspirations against a spouse, and now a child.
It makes no difference to him whether one of his co-authors doesn't like babies and maternity leave. It did to me.
He's smarter than me. (But I don't believe that.)
Now I am not saying that any of these factors--mostly, but not completely, out of the control of any particular scientist--are irrelevant. But there are other factors that are even more important and much more under the control of each scientist. They all relate to strategic planning, which I define here as the process of choosing both a scientific problem to address and appropriate methodological approaches for addressing it.
First, how do you choose a scientific problem to address? If your goal is, in part, to publish your work in a top field-specific or C/N/S-level journal, then whether the problem is interesting to you is only the beginning of the inquiry. You need to ask whether the problem is central to your field (or, possibly, subfield), and whether you know it to be of great interest to that field as a whole, based on what currently exists in the peer-reviewed literature, reviews, and commentary, and on what you suss out at conferences.
Thus, another possible answer to Sciencewoman's question is that her junior colleague, together with his PI mentor, surveyed the state of their field and chose a scientific problem that they explicitly determined to be of importance. Of course, junior scientists cannot do this without the guidance and participation of their mentors, so a failure on this count is, at least in part, a failure of mentorship. This is what is meant by "scientific taste", and without it, it will be difficult to publish in good journals, secure independent PI positions, and obtain grant support.
Second, how do you identify appropriate methodological approaches to addressing your problem? In answering this question, let's use as a starting point two excerpts from a recent comment by MsPhD at Dr. J's place:
For better or worse, I tend to go where the interesting questions are, whether it's what I'm good at or not. Sometimes that means I have to work a lot harder to get the answer.
This represents poor judgment on MsPhD's part and, more importantly, her post-doctoral mentor's part.
Before new experiments are embarked upon in the lab, one must engage in an explicit decision-making process in which one assesses the costs and benefits of doing so. This analysis includes weighing, inter alia, the following factors (a toy sketch of such a weighing appears after the list):
(1) the potential best-case payoff of the experiments (this is what MsPhD would call identifying "where the interesting questions are");
(2) the intrinsic difficulty of the experiments;
(3) how much experience we already have in the required experimental approaches (do they play to our methodological strengths?);
(4) how long the experiments will take to perform;
(5) how likely it is that the experiments will end up inconclusive;
(6) how likely it is that our competitors are on the same path;
(7) whether there are better experimental approaches to the same question;
(8) whether there are even more important questions that would have to be forgone in order to address the one(s) under consideration.
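To make this weighing concrete, here is a minimal toy sketch of how the eight factors might be rolled into a single crude expected-value estimate. Every weight, probability, and score in it is hypothetical and purely illustrative--real weighing is a judgment call, not a formula:

```python
# Toy sketch of the cost/benefit weighing above. All numbers, names,
# and the scoring formula itself are hypothetical illustrations, not
# anything prescribed in this post.

def project_score(payoff, difficulty, expertise_fit, months,
                  p_inconclusive, p_scooped, opportunity_cost):
    """Crude expected-value estimate for a candidate set of experiments.

    payoff:           best-case impact if everything works (factor 1)
    difficulty:       0-1 penalty for intrinsic difficulty (factor 2)
    expertise_fit:    0-1, how well it plays to existing strengths (factor 3)
    months:           estimated time to completion (factor 4)
    p_inconclusive:   probability the experiments settle nothing (factor 5)
    p_scooped:        probability a competitor gets there first (factor 6)
    opportunity_cost: value of the best alternative forgone (factors 7-8)
    """
    p_payoff = (1 - p_inconclusive) * (1 - p_scooped)
    effort_penalty = difficulty * (1 - expertise_fit)
    return (payoff * p_payoff) / months - effort_penalty - opportunity_cost

# Compare two hypothetical routes to the same "interesting question".
plays_to_strengths = project_score(payoff=10, difficulty=0.5, expertise_fit=0.9,
                                   months=12, p_inconclusive=0.2, p_scooped=0.3,
                                   opportunity_cost=0.1)
against_the_grain = project_score(payoff=10, difficulty=0.9, expertise_fit=0.2,
                                  months=24, p_inconclusive=0.5, p_scooped=0.3,
                                  opportunity_cost=0.1)
print(plays_to_strengths, against_the_grain)
```

The point of the sketch is not the numbers but the structure: the same "interesting question" can score very differently depending on fit, time, and risk.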
Identifying "where the interesting questions are" is only the beginning of the process of deciding where to direct one's experimental efforts.
Most of the people I know who have 'excellent hands' were just lucky enough to work on things that are either easier than what I do, or things that suit their particular talents perfectly. But there is some luck involved, or they've deliberately avoided things that they think will be too hard to do. Sometimes it is a wise choice.
Having "excellent hands" is not "luck" (whatever that even means). It is the outcome of a decision-making process that takes account of the costs and benefits outlined above. Failing to capitalize on existing methodological expertise is an error of judgment.
And if the perception is that existing methodological expertise is not suited to the "interesting questions", then the error of judgment occurred earlier, in spending the time and effort to develop that expertise in the first place.
Sustained scientific success requires at least some foresight to stay in front of the methodological waves that course through a field. One of the ways to do this is to actually develop novel methodologies that can be used to address interesting questions.
Papers in top field-specific and C/N/S-level journals almost always combine an answer to a generally important question in a field with the development and application of a generally applicable novel methodology for addressing that and related questions. Strategic planning for publishing in these kinds of journals requires explicit consideration of both the importance of the question and the methodological approaches. It is hardly "luck".
UPDATE: In comments, Sciencewoman poses the following question:
Aside from testing hypotheses where you already know the outcome (and therefore not really pushing the envelope of knowledge), how do you factor out luck in determining whether or not your novel hypothesis pans out and you get a high-profile pub?
This raises a key aspect of strategic planning that should have gone into the body of the post (and now does, via this update). Every laboratory, and every scientist within a laboratory, needs to have an explicit plan for managing risk/reward.
Just as investment fund managers do not put all of their assets into low-risk/low-yield US Treasury notes, they also do not put all of their assets into high-risk/high-yield junk bonds. Rather, they try to put together a balanced portfolio that combines the guaranteed, but modest, returns of low-risk/low-yield investments with exposure to high-risk/high-yield investments.
This is exactly what every PI, and every scientist training under that PI, needs to do. An effective scientific portfolio contains a balance of low-risk/low-reward projects with a very predictable, but modest, outcome and high-risk/high-reward projects with an uncertain, but potentially high-impact, outcome. And because the latter are, by definition, uncertain, if one wants to maximize the likelihood of a high-impact payoff, one needs to expose oneself to more than one high-risk/high-reward scenario at a time.
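To make the arithmetic behind that last sentence concrete, here is a minimal sketch, assuming (purely for illustration) that each high-risk project succeeds independently with probability 0.15; neither the number nor the independence assumption comes from anything in this post:

```python
# Probability that at least one of n independent high-risk/high-reward
# projects pays off, if each succeeds with probability p. The numbers
# are hypothetical; the point is how fast the odds improve with n.

def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

for n in range(1, 5):
    print(n, round(p_at_least_one(0.15, n), 2))
# 1 project:  0.15
# 2 projects: 0.28
# 3 projects: 0.39
# 4 projects: 0.48
```

Even at long odds per project, going from one bet to three or four more than doubles or triples the chance that something pays off--which is the whole argument for keeping several high-risk irons in the fire at once.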
A corollary of this kind of analysis is developing the judgment to know when to "cut bait" on a particular high-risk/high-reward project that appears unlikely to pay off, on the one hand, and when to "go all in" on some other high-risk/high-reward project that appears likely to pay off, on the other.