Predictors of Grad School Publications

(by drugmonkey) Jan 11 2017

A new paper in PLoS ONE purports to report on the relationship between traditional graduate school selection factors and graduate school success.

Joshua D. Hall, Anna B. O’Connell, Jeanette G. Cook. Predictors of Student Productivity in Biomedical Graduate School Applications. PLoS ONE, published January 11, 2017.
http://dx.doi.org/10.1371/journal.pone.0169121 [Publisher Link]

The setup:

The cohort studied comprised 280 graduate students who entered the BBSP at UNC from 2008-2010; 195 had graduated with a PhD at the time of this study (July 2016), 45 were still enrolled, and 40 graduated with a Master's degree or withdrew. The cohort included all of the BBSP students who matriculated from 2008-2010.

The major outcome measure:

Publications by each student during graduate school were quantified with a custom Python script that queried Pubmed (http://www.ncbi.nlm.nih.gov/pubmed) using author searches for each student's name paired with their research advisor's name. The script returned XML attributes (https://www.nlm.nih.gov/bsd/licensee/elements_alphabetical.html) for all publications and generated the number of first-author publications and the total number of publications (including middle authorship) for each student/advisor pair.
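
For the curious, here is a minimal sketch of that kind of author-pair query. This is not the authors' actual script; it assumes Biopython's Entrez and Medline modules, the names are illustrative, and a real implementation would have to deal with name ambiguity, date windows, and NCBI rate limits.

```python
from Bio import Entrez, Medline  # Biopython; pip install biopython

Entrez.email = "someone@example.edu"  # NCBI asks for a contact address

def count_pubs(student, advisor):
    """Return (first_author, total) publication counts for a student/advisor
    pair, with names in PubMed 'Lastname Initials' form, e.g. 'Smith JA'."""
    term = f"{student}[Author] AND {advisor}[Author]"
    handle = Entrez.esearch(db="pubmed", term=term, retmax=200)
    pmids = Entrez.read(handle)["IdList"]
    handle.close()
    if not pmids:
        return 0, 0

    # Pull MEDLINE records so we can see the author order for each paper.
    handle = Entrez.efetch(db="pubmed", id=",".join(pmids),
                           rettype="medline", retmode="text")
    records = list(Medline.parse(handle))
    handle.close()

    surname = student.split()[0].lower()
    first_author = sum(
        1 for rec in records
        if rec.get("AU") and rec["AU"][0].split()[0].lower() == surname
    )
    return first_author, len(records)

print(count_pubs("Smith JA", "Jones RB"))  # hypothetical student/advisor pair
```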

For analysis, they grouped the students into bins of 3+, 1-2, or 0 first-author publications, with a '0+' category for students with zero first-author publications but at least one middle-author publication.
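
In code terms, the binning might look something like the following; the labels mirror the paper's categories, but the function itself is my own paraphrase rather than anything from Hall et al.

```python
def productivity_bin(first_author, total):
    """Map a student's publication counts onto the paper's productivity bins."""
    if first_author >= 3:
        return "3+"
    if first_author >= 1:
        return "1-2"
    if total > 0:  # no first-author papers, but at least one middle-author credit
        return "0+"
    return "0"
```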

OMG! Nothing predicts graduate school performance (especially those evil, evil, biased - I mentioned evil, right? - standardized scores).

Yes, even people who score below the 50th percentile on quantitative or verbal GRE land first-author publications! (The apple-polishing high-GPA kids don't seem to fare particularly better, either; plenty of first-author publications were earned by the 3.0-3.5 riff-raff.)

Oh bai the wai...prior research experience doesn't predict anything either.

Guess what did predict first author publications? Recommendation scores. That's right. The Good Old Boys/Girls Club of biased recommendations from undergraduate professors is predictive of the higher producing graduate students.

As the authors note in the Discussion, this analysis focused only on student characteristics. It could not account for the mentor lab, the interaction of student characteristics with mentor-lab characteristics, and the like.

I'll let you Readers mull this one over for a bit but I was struck by one thing.

We may be talking at cross purposes when we discuss how application metrics are used to predict graduate student success because we do not have the same idea of success in mind.

This analysis suggests the primary measure of success of a graduate student is the degree to which they succeeded in being a good data-monkey who produces a lot of publishable stuff within the context of their given research laboratory. And by this measure, nothing is very predictive, going by the Hall et al. analysis, except the recommendation letter of those who are trying to assess the whole package from their varied perspectives of "I know it when I see it*".

Grad student publication number is, of course, related to who will go on to be a success as a creative independent scientist because of the very common belief that past performance predicts future performance. Those who exit grad school with zero pubs are facing an uphill battle to attain a faculty position. Those with 3+ first author pubs will generally be assumed to be more in the hunt as a potential future faculty member all along the postdoctoral arc.

Assuming all else equal.

This is another way we talk past each other about standardized scores, etc.

The PI who is trying to select a graduate student for their own lab can assume "all else equal". Approximately. Same lab, same basic environment. We don't have this information from Hall et al. and I think it would be pretty difficult to do the study in a way that used same-lab as a covariate. Not impossible... you're just going to need a very large boat.

I think of it this way. Maybe there are some labs where everyone gets 3 or more first-author papers? Maybe there are some where it takes a very special individual indeed to get more than one in the course of graduate school? And without knowing whether student characteristics determine the host lab, we have to assume random(ish) assignment. Thus it could be the case that a better GRE quant score, for example, gives a slight advantage within a lab but this is wiped out by the variability between labs.

A selection committee for a graduate program can be less confident about all else being equal. It has to ask what sort of student can be successful across all of the lab environments in the program, or at least in the majority of them. The Hall et al. data say that many types can be. But we are still asking whether the training environment is such an overwhelming factor that almost nothing about the individual matters. This seems to be the message.

If so, why are we bothering to select students at all? Why have them apply with any details other than the recommendation letters?

Maybe this is another place we are speaking at cross purposes. Some of us may still believe that the point of graduate school selection is to train the faculty (or insert any other specific career outcome if relevant) of tomorrow. Part of the goal, therefore, may be to select people on the basis of who we think would be best at that future role**, regardless of the variation in papers generated with first-author credit as a graduate student.

Is the Hall et al. paper based on a straw notion of "success"?

I think you've probably noticed, Dear Reader, that my opinion is that the career of grant-funded PI takes some personality characteristics that are not easily captured by the number of first-author pubs as a graduate student. Grit and resilience. Intrinsic motivation to git-er-done. Awareness of the structural, career-type aspects. At least a minimal amount of interpersonal skills.

What I am not often on about is that I think, given approximately equal conditions, smarts matters. This is not to say that smarts is the only thing. If you are smart as all heck and you don't have what it takes to be productive or to take a hit, you aren't going to do well. It's the flip side. If two people do have grit and resilience and motivation... the smarter person is going to have an easier time of it or achieve more for the same effort**. On average.

And this is a test that is not performed in the new paper. Figuring out how to compare outcomes within laboratory groups might be an advance on this question.

__
*When I write recommendation letters for undergrads who have worked with me I do not have access to their standardized scores or grades. I have my subjective impressions of their smarts and industry from their work in my lab to go by. That's it. Maybe other people formally review a transcript and scores before writing a letter? I doubt it but I guess that is possible.

**Regarding that future role, again it may be a question of what is most important for success. Within our own lab, we are assuming that differential opportunity to get publications is not a thing. So since this part of the environment is fixed, we should be thinking about what is going to lead to enhanced success down the road, given conceivable other environments. From the standpoint of a Program, is it the same? Or do we just feel as though the best success in our Program is enough to ensure the best success in any subsequent environment? The way we look at this may be part of what keeps us talking past each other about what graduate selection is for.

18 responses so far

Advice for Prospective Graduate Students

(by drugmonkey) Jan 06 2017

There is a lot of great advice of the usual sort floating around - talk to current grad students and postdocs about Department, Program and Lab culture. Median time to completion*. So I won't repeat that.

But here's one thing you may not hear about.

Ask the Program Director for the past two 5-year reviews of the Program. Yes, graduate training programs get peer reviewed on a periodic basis. Every 5 years in my limited experience.

Ask to see the review. Absent that, ask for the top five most serious criticisms. In fact, you should ask this latter question of anyone who interviews you, to get a sense of how much the Program is integrated vs ad hoc.

Here's another important question to ask the interviewing faculty: "Who are the most recent 5-10 faculty appointments to come from your Program alumni?" The key here is to ask it on the spot so they can't look it up.

The most important thing here will not be the actual-factual answers. It will be how the faculty respond to your inquiries.

Good luck.

--
*Please tell me every prospect asks about the median time to PhD?

UPDATE: I meant this as a step to take after you are invited to interview or offered admission. A step for you to take to help decide which program to attend. Although I suppose even if you only get one offer it is helpful to know what to expect or watch out for.

74 responses so far

Thought of the Day

(by drugmonkey) Jan 04 2017

21 responses so far

Tenured profs should pick up the check?

(by drugmonkey) Jan 03 2017

While I think generosity on the part of more senior scientists is a good thing, and should be encouraged, making this an obligation is flawed. How do you know what that person's obligations are?

I post this in case any PI types out there don't know this is a thing. If you can pick up a check or pay more than your share, hey great. Good for you.

But nobody should expect it of you.

26 responses so far

Happy 2017!

(by drugmonkey) Jan 01 2017

Happy New Year everyone!

Generate knowledge.

Act like a decent person.

Oppose and illuminate the indecent behavior that crosses your path.

3 responses so far

Cannabidiol is still Schedule I, where it has been for some time

(by drugmonkey) Dec 31 2016

The DEA has created a new drug code for cannabis extracts, leading to some feather fluffing in the advocacy press.

The Federal Register notice explaining this is pretty clear so I'm not seeing where the alleged confusion lies.

The part responding to prior comment makes the situation with cannabidiol (CBD) very explicit.

One comment requested clarification of whether the new drug code will be applicable to cannabidiol (CBD), if it is not combined with cannabinols.

DEA response: For practical purposes, all extracts that contain CBD will also contain at least small amounts of other cannabinoids.1 However, if it were possible to produce from the cannabis plant an extract that contained only CBD and no other cannabinoids, such an extract would fall within the new drug code 7350. In view of this comment, the regulatory text accompanying new drug code 7350 has been modified slightly to make clear that it includes cannabis extracts that contain only one cannabinoid.

CBD has been on the Schedule for quite some time as far as I know. It is listed specifically on the application for a researcher license. You won't be able to buy it from a legitimate scientific reagent company such as Sigma without a DEA license. Very hard to miss.

I am aware of some very dodgy stuff going on with CBD for the quack supplement industry. From what I can tell, some of these companies are importing pure CBD under cover of "industrial hemp". Hemp is defined by lack of delta9-THC content, of course, which makes "hemp" that contains high levels of the clearly Scheduled CBD a very gray area. It will be interesting to see if part of the outcome of this new extracts code will be invigorated prosecution of these CBD supplement companies.

2 responses so far

Ethics reminder for scientists

(by drugmonkey) Dec 31 2016

If the lab head tells the trainees or techs that a specific experimental outcome* must be generated by them, this is scientific misconduct.

If the lab head says a specific experimental outcome is necessary to publish the paper, this may be very close to misconduct or it may be completely aboveboard, depending on context. The best context to set is a constant mantra that any outcome teaches us more about reality and that is the real goal.

--
*no we are not talking about assay validation and similar technical development stuff.

15 responses so far

Cannabis hyperemesis syndrome rates increase with marijuana legalization

(by drugmonkey) Dec 31 2016

A CBS News report covers a 2015 paper:

Howard S. Kim, MD; John D. Anderson, MD; Omeed Saghafi, MD; Kennon J. Heard, MD, PhD; and Andrew A. Monte, MD. Cyclic Vomiting Presentations Following Marijuana Liberalization in Colorado. Acad Emerg Med. 2015 Jun; 22(6): 694–699. Published online 2015 Apr 22. [PubMed]

From the Abstract:


The authors reviewed 2,574 visits and identified 36 patients diagnosed with cyclic vomiting over 128 visits. The prevalence of cyclic vomiting visits increased from 41 per 113,262 ED visits to 87 per 125,095 ED visits after marijuana liberalization, corresponding to a prevalence ratio of 1.92 (95% confidence interval [CI] = 1.33 to 2.79). Patients with cyclic vomiting in the postliberalization period were more likely to have marijuana use documented than patients in the preliberalization period (odds ratio = 3.59, 95% CI = 1.44 to 9.00).
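
For reference, the quoted prevalence ratio is just the post-liberalization visit rate divided by the pre-liberalization rate (my arithmetic, rounded):

$$\mathrm{PR} = \frac{87/125{,}095}{41/113{,}262} \approx \frac{0.000695}{0.000362} \approx 1.92$$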

For background on the slow, Case Report driven appreciation that a chronic cyclical vomiting syndrome can be caused by cannabis use, see blog posts here, here, here.

The major takeaway message is that when physicians or patients are simply aware that there is this syndrome, diagnosis can be more rapid and a lot less expensive. Patients can, if they are able to stop smoking pot, find relief more quickly.

As far as the present report showing increasing rates in CO goes, well, this is interesting. It is consistent with a specific causal relationship between cannabis use and this hyperemesis syndrome. But it is hard to disentangle growing awareness of the syndrome from growing incidence of it. We'll just have to follow these relationships as more states legalize medical and recreational marijuana.

Additional coverage from Dirk Hansen.

No responses yet

Finishing projects

(by drugmonkey) Dec 30 2016

If you are paid by the taxpayers, or generous private philanthropists, of your country to do science, you owe them a product. An attempt to generate knowledge. This is one of the things that orients much of my professional behavior, as I think I make clear on this blog.

If you haven't published your scientific work, it doesn't exist. This is perhaps an excessive way to put it but I do think you should try to publish the work you accomplish with other people's money.

Much of my irritation with the publication game, prestige chasing, delusions of complete stories, priority / scooping fears and competition for scarce funding resources can be traced back to these two orienting principles of mine.

My irritation with such things does not, however, keep them from influencing my career. It does not save me from being pressured not to give the funders their due.

It is not unusual for my lab, and I suspect many labs, to have thrown a fair amount of effort and resources into a set of investigations and to realize a lot more will be required to publish. "Required," I should say, because the threshold for publication is highly variable.

Do I throw the additional resources into an effort to save what is half or three-quarters of a paper? To make the project to date publishable? I mean, we already know the answer and it is less than earth shaking. It was a good thing to look into, of course. Years ago a study section of my peers told us so to the tune of a very low single digit percentile on a grant application. But now I know the answer and it probably doesn't support a lot of follow-up work.

Our interests in the lab have moved along in several different directions. We have new funding and, always, always, future funding to pursue. Returning to the past is just a drag on the future, right?

I sometimes feel that nobody other than me is so stupid as to remember that I owe something. I was funded by other people's money to follow a set of scientific inquiries into possible health implications of several things. I feel as though I should figure out how to publish the main thing(s) we learned. Even if that requires some additional studies be run to make something that I feel is already answered into something "publishable".

21 responses so far

Twelve Months of Drugmonkey (2016)

(by drugmonkey) Dec 22 2016

I've been doing these year-end summaries for quite some time now. Previously I've posted a link to the first post of every month. For this year I'm going to shake it up and post the last entry of the month.

Jan: In the NIH extramural grant funding world the maximum duration for a project is 5 years.

Feb: There are these moments in science where you face a decision...Am I going to be the selfish asshole here?

Mar: Jocelyn Kaiser reports that some people who applied for MIRA person-not-project support from NIGMS are now complaining.

Apr: The Ramirez Group is practicing open grantsmanship by posting "R01 Style" documents on a website.

May: By now most of you are familiar with the huge plume of vapor emitted by a user of an e-cigarette device on the streets.

Jun: A Daniel Sarewitz wrote an opinion piece in Nature awhile back to argue that the pressure to publish regularly has driven down the quality of science.

Jul: The other lesson to be drawn from recent political events and applied to science careers is not to let toxic personalities drive the ship.

Aug: From the NYT account of the shooting of Dennis Charney:

Sep: The NIH FOAs come in many flavors of specificity.

Oct: Imagine that the New Investigator status (no prior service as PI of major NIH grant) required an extra timeline document?

Nov: So. A federal judge* managed to put a hold on Obama's move to increase the threshold for overtime exemption.

Dec: If you love the NIH and its mission, your mantra for the next four years is a simple one.

__
[2015][2014][2012][2011][2010][2009][2008]

3 responses so far
