But clearly the laboratory-based male scientists would never harass their female subordinates.
Field science is bad.
Lab science is good.
This is what the head of the Office of Extramural Research at the NIH seems to think.
As I noted in a prior post, the Cycle I NIH grant awards (submitted in Feb-Mar, reviewed Jun-Jul, Council in Aug), with a first possible funding date of December 1, are hardly ever funded on time. This is because Congress never passes a budget on time for the Fiscal Year that starts in October. Congress sometimes resorts to a stop-gap measure, like a Continuing Resolution, which theoretically permits Federal agencies to spend along the parameters of the past year's budget. I find that the NIH ICs of my greatest interest are highly conservative and never* fund new grants in December. The ICs that I follow almost inevitably wait until late January, when Congress returns from its winter recess, to see if it will do something more permanent.
New Cycle I grants then typically start trickling out in February.
This year one of my favorite ICs, namely NIDA, has only just issued its new Cycle I grants** this week; they hit RePORTER today.
March friggin 9th.
Six new R01 awards. Three K01s, three K99s, one R15, one "planning grant" and three SBIRs.
Even this is just a trickle, compared to what they should be funding for one of their major Cycles. I anticipate there will be a lot more coming out over the next couple of weeks so that they can (hopefully?) clear the decks for the Cycle II awards that are supposed to fund April 1.
I pity all those poor PIs out there waiting, just waiting, for their awards to fund. I cannot imagine why NIDA chooses to do this instead of at least trickling out the best-scoring awards and the stuff they KNOW they are going to fund, way back in December***.
*Statistically indistinguishable from never
**You can tell by clicking on the individual awards and you'll see that they (R01s anyway) end Nov 30 or Dec 31 for the initial round of funding. These are Cycle I, not upjumped Cycle II.
***Some ICs do tend to fund a few new awards in December, no matter what the status of Congress' activity on a budget.
I asked about your experiences with the transition from Master's programs to PhD granting programs because of the NIH Bridges to the Doctorate (R25) Program Announcement. PAR-16-109 has the goal:
... to support educational activities that enhance the diversity of the biomedical, behavioral and clinical research workforce. ... To accomplish the stated over-arching goal, this FOA will support creative educational activities with a primary focus on Courses for Skills Development and Research Experiences.
Good stuff. Why is it needed?
Underrepresentation of certain groups in science, technology and engineering fields increases throughout the training stages. For example, students from certain racial and ethnic groups including, Blacks or African Americans, Hispanics or Latinos, American Indians or Alaska Natives, Native Hawaiians and other Pacific Islanders, currently comprise ~39% of the college age population (Census Bureau), but earn only ~17% of bachelor’s degrees and ~7% of the Ph.D.’s in the biological sciences (NSF, 2015). Active interventions are required to prevent the loss of talent at each level of educational advancement. For example, a report from the President’s Council of Advisors on Science and Technology recommended support of programs to retain underrepresented undergraduate science, technology, engineering and math students as a means to effectively build a diverse and competitive scientific workforce (PCAST Report, 2012).
[I actually took this blurb from the related PAR-16-118 because it had the links]
And how is this Bridges to the Doctorate supposed to work?
The Bridges to Doctorate Program is intended to provide these activities to master's level students to increase transition to and completion of PhDs in biomedical sciences. This program requires partnerships between master's degree-granting institutions with doctorate degree-granting institutions.
Emphasis added. Oh Boy.
God forbid we have a NIH program that doesn't ensure that in some way the rich, already heavily NIH-funded institutions get a piece of the action.
So what is it really supposed to be funding? Well, first off, you will note it is pretty substantial: "Application budgets are limited to $300,000 direct costs per year. The maximum project period is 5 years." Better than a full-modular R01! (Some of y'all will also be happy to note that it only comes with 8% overhead.) It is not supposed to fully replace NRSA individual fellowships or training grants.
Research education programs may complement ongoing research training and education occurring at the applicant institution, but the proposed educational experiences must be distinct from those training and education programs currently receiving Federal support. R25 programs may augment institutional research training programs (e.g., T32, T90) but cannot be used to replace or circumvent Ruth L. Kirschstein National Research Service Award (NRSA) programs.
That is less than full support to prospective trainees to be bridged. So what are the funds for?
Salary support for program administration, namely, the PDs/PIs, program coordinator(s), or administrative/clerical support is limited to 30% of the total direct costs annually.
Support for faculty from the doctoral institution serving as visiting lecturers, offering lectures and/or laboratory courses for skills development in areas in which expertise needs strengthening at the master’s institution;
Support for faculty from the master’s degree-granting institution for developing or implementing special academic developmental activities for skills development; and
Support for faculty/consultants/role models (including Bridges alumni) to present research seminars and workshops on communication skills, grant writing, and career development plans at the master’s degree institution(s).
Cha-ching! One element for the lesser-light Master's U, one element for the Ivory Tower faculty and one slippery slope that will mostly be the Ivory Tower faculty, I bet. Woot! Two to one for the Big Dogs!
Just look at this specific goal... who is going to be teaching this, can we ask? The folks from the R1 partner-U, that's who. Who can be paid directly, see above, for the effort.
1. Courses for Skills Development: For example, advanced courses in a specific discipline or research area, or specialized research techniques.
White Man's R1 Professor's Burden!
Okay, now we get to the Master's level student who wishes to benefit. What is in it for her?
2. Research Experiences: For example, for graduate and medical, dental, nursing and other health professional students: to provide research experiences and related training not available through formal NIH training mechanisms; for postdoctorates, medical residents and faculty: to extend their skills, experiences, and knowledge base.
Yeah! Sounds great. Let's get to it.
Participants may be paid if specifically required for the proposed research education program and sufficiently justified.
"May". May? MAY???? Wtf!??? Okay, if this is setting up unpaid undergraduate-style "research experience" scams, I am not happy. At all. This better not be what occurs. Underrepresented folks are disproportionately unlikely to be able to financially afford such shenanigans. PAY THEM!
Applicants may request Bridges student participant support for up to 20 hours per week during the academic year, and up to 40 hours/week during the summer at a pay rate that is consistent with the institutional pay scale.
That's better. And the pivot of this program, if I can find anything at all to like about it. Master's students can start working in the presumably higher-falutin' laboratories of the R1 partner and do so for regular pay, hopefully commensurate with what that University is paying their doctoral graduate students.
So here's what annoys me about this. Sure, it looks good on the surface. But think about it. This is just another way for fabulously well funded R1 University faculty to get even more cheap labor that they don't have to pay for from their own grants. 20 hrs a week during the school year and 40 hrs a week during the summer? From a Master's level person bucking to get into a doctoral program?
Sign me the heck UP, sister.
Call me crazy but wouldn't it be better just to lard up the underrepresented-group-serving University with the cash? Wouldn't it be better to give them infrastructure grants, and buy the faculty out of teaching time to do more research? Wouldn't it be better to just fund a bunch more research grants to these faculty at URGSUs? (I just made that up).
Wouldn't it be better to fund PIs from those underrepresented groups at rates equal to good old straight whitey old men at the doctoral granting institution so that the URM "participants" (why aren't they trainees in the PAR?) would have somebody to look at and think maybe they can succeed too?
But no. We can't just hand over the cash to the URGSUs without giving the already-well-funded University their taste. The world might end if NIH did something silly like that.
Neuroscientist Bita Moghaddam asked a very interesting question on Twitter but it didn't get much discussion yet. I thought I'd raise it up for the blog audience.
— Bita Moghaddam (@bita137) February 22, 2016
My immediate thought was that we should first talk about the R13 Support for Scientific Conferences mechanism. These are often used to provide some funding for Gordon Research Conference meetings, for the smaller society meetings and even some very small local(ish) conferences. Examples from NIDA, NIMH, NIGMS. I say first because this would seem to be the very easy case.
NIH should absolutely keep a tight eye on gender distribution of the meetings supported by such grant awards. The FOA reads, in part:
Additionally, the Conference Plan should describe strategies for:
Involving the appropriate representation of women, minorities, and persons with disabilities in the planning and implementation of, and participation in, the proposed conference.
Identifying and publicizing resources for child care and other types of family care at the conference site to allow individuals with family care responsibilities to attend.
so it is a no-brainer there, although as we know from other aspects of NIH the actual review can depart from the FOA. I don't have any experience with these mechanisms personally so I can't say how well this particular aspect is respected when it comes to awarding good (fundable) scores.
Obviously, I think any failure to address representation should be a huge demerit. Any failure to achieve representation at the same, or similar meeting ("The application should identify related conferences held on the subject during the past 3 years and describe how the proposed conference is similar to, and/or different from these."), should also be a huge demerit.
At least as far as this FOA for this scientific conference support mechanism goes, the NIH would appear to be firmly behind the idea that scientific meetings should be diverse.
By extension, we can move on to the actual question from Professor Moghaddam. Should we use the additional power of travel funds to address diversity?
Of course, right off, I think of the ACNP annual meeting because it is hands down the least diverse meeting I have ever attended. By some significant margin. Perhaps not in gender representation but hey, let us not stand only on our pet issue of representation, eh?
As far as trainees go, I think heck no. If my trainee wants to go to any particular meeting because it will help her or him in their careers, I can't say no just to advance my own agenda with respect to diversity. Like it or not, I can't expect any of them to pay any sort of price for my tender sensibilities.
Myself? Maybe. But probably not. See the aforementioned ACNP. When I attend that meeting it is because I think it will be advantageous for me, my lab or my understanding of science. I may carp and complain to certain ears that may matter about representation at the ACNP, but I'm not going on strike about it.
Other, smaller meetings? Like a GRC? I don't know. I really don't.
I thank Professor Moghaddam for making me think about it though. This is the start of a ponder for me and I hope it is for you as well.
As a reminder, the NIH issued a warning about the upcoming Simplification of the Vertebrate Animals Section of NIH Grant Applications and Contract Proposals.
Simplification! Cool, right?
There's a landmine here.
For years the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. In more recent times, a growing insistence on the part of the OLAW that a proper Vertebrate Animals Section include the power analysis has influenced me to drop the power analysis from the Research Strategy. It became a word for word duplication so it seemed worth the risk to regain the page space.
The notice says:
Summary of Changes
The VAS criteria are simplified by the following changes:
A description of veterinary care is no longer required.
Justification for the number of animals has been eliminated.
A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.
This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".
So back to the old way we must go. Leave space for your power analysis, folks.
If you don't know much about doing a power analysis, this website is helpful: http://homepage.stat.uiowa.edu/~rlenth/Power/
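To make the idea concrete, here is a minimal sketch (not taken from the linked site) of a sample-size calculation using the normal approximation for a two-group comparison. The effect size, alpha, and power values are illustrative assumptions, and the exact t-test answer runs slightly higher (roughly 26 vs. 25 per group for d = 0.8):

```python
# Minimal a priori power analysis sketch (normal approximation,
# two-sample comparison, stdlib only). All parameter values are
# illustrative, not prescribed by OLAW or the VAS guidance.
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate subjects needed per group to detect effect size d (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.8))  # large effect: 25 per group (normal approximation)
print(n_per_group(0.5))  # medium effect: 63 per group
```

Dedicated tools (G*Power, or `statsmodels.stats.power` in Python) give the exact t-test numbers and handle other designs.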
An email from the CSR of the NIH hit a few days ago, pointing to a number of their Peer Review Notes, including one on the budget bump that we are about to enjoy.
Actually that should be "some selected few of us will enjoy" because
“While $2 billion is a big increase, it is less than a 10 percent increase, and a large portion of it is set aside for specific areas and initiatives,” said Dr. Nakamura. “Competition for funding is still going to be intense, and paylines will not return to historic averages . . .
Yeah, as suspected, that money is already accounted for.
The part that has me fired up is the continuation after that ellipsis and a continuing header item.
So make sure you put your best effort into your application before you apply.”
“We know some research deans have quotas and force their PIs to submit applications regularly,” said Dr. Nakamura. “It’s important for them to know that university submission rates are not correlated with grant funding. Therefore, PIs should be encouraged to develop and submit applications as their research and ideas justify the effort to write them and have other scientists review them.”
As usual I do not know if this is coming from ignorance or calculated strategy to make their numbers look better. I fear both possibilities. I'm going from memory here because I can't seem to rapidly find the related blog post or data analysis but I think I recall an illustration that University-total grant submission rates did not predict University-total success rates.
At a very basic level, Nakamura is using the lie of the truncated distribution. If you don't submit any grant applications, your success rate is going to be zero. I'm sure he's excluding those cases, because including them would seemingly make for a nice correlation.
But more importantly, he is trying to use university-wide measures to convince the individual PI what is best for her to do.
Wrong. Wrong. Wrong.
Not everyone's chances at that institution are the same. The more established investigators will probably, on average, enjoy a higher success rate. They can therefore submit fewer applications. Lesser folk enjoy lower success rates so therefore they have to keep pounding out the apps to get their grants.
By extension, it takes very little imagination to understand that depending on your ratio of big important established scientists to noobs, and based somewhat on subfields, the apparent University-wide numbers are going to swamp out the information that is needed for each individual PI.
In short, this is just another version of the advice to young faculty to "write better grants, just like the greybeards do".
The trick is, the greybeards DO NOT WRITE BETTER GRANTS! I mean sure, yes, there is a small experience factor there. But the major driver is not the objective quality but rather the established track record of the big-deal scientist. This gives them little benefits of the doubt all over the place as we have discussed on this blog endlessly.
I believe I have yet to hear from a newcomer to NIH grant review who has not had the experience, within 1-2 rounds, of a reviewer ending his/her review of a clearly lower-quality grant proposal with "....but it's Dr. BigShot and we know she does great work and can pull this off". Or similar.
I have been on a study section round or two in my day and I am here to tell you. My experience is not at all consistent with the idea that the "best" grants win out. Merit scores are not a perfect function of objective grant quality at all. Imperfectly prepared or boring grants get funded all the time. Really exciting and nearly-perfect grants get unfundable scores or triaged. Frequently.
This is because grant review hinges on the excitement of the assigned reviewers for the essence of the project. All else is detail.
You cannot beat this system by writing a "perfect" grant. Because it may not be perfect for all three reviewers no matter how well it has been prepared and how well vetted by whatever colleagues you have rounded up to advise you.
Nakamura should know this. He probably does. Which makes his "advice" a cynical ploy to decrease submissions so that his success rate will look better.
One caveat: I could simply be out of touch with all of these alleged Dean-motivated crap apps. It is true that I have occasionally seen people throw up grant applications that really aren't very credible from my perspective. They are very rare. And it has occasionally been the case that at least one other reviewer liked something about an application I thought was embarrassingly crappy. So go figure.
I also understand that there are indeed Deans or Chairs that encourage high submission rates and maybe this leads to PIs writing garbage now and again. But this does not account for the dismal success rates we are enjoying. I bet that magically disappearing all apps that a PI submitted to meet institutional vigor requirements (but didn't really mean to make a serious play for an award) would have no perceptible effect on success rates for the rest of us. I just haven't ever seen enough non-credible apps for this to make a difference. Perhaps you have another experience on study section, DearReaders?
Finally, I really hate this blame-the-victim attitude on the part of the CSR and indeed many POs. There are readily apparent and demonstrable problems with how some categories of PIs' grants are reviewed. Newer and less experienced applicants. African-American PIs. Women. Perhaps, although this is less well-explicated lately, those from the wrong Universities.
For the NIH to avoid fixing their own problems with review (for example the vicious cycle of study sections punishing ESI apps with ever-worsening scores when the NIH used special paylines to boost success rates) and then blame victims of these problems by suggesting they must be writing bad grants takes chutzpah. But it is wrong. And demoralizing to so many who are taking it on the chin in the grant review game.
And it makes the problems worse. How so? Well, as you know, Dear Reader I am firmly convinced that the only way to succeed in the long term is to keep rolling the reviewer dice, hoping to get three individuals who really get what you are proposing. And to take advantage of the various little features of the system that respond to frequent submissions (reviewer sympathy, PO interest, extra end of year money, ARRA, sudden IC initiatives/directions, etc). Always, always you have to send in credible proposals. But perfect vs really good guarantees you nothing. And when perfect keeps you from submitting another really good grant? You are not helping your chances. So for Nakamura to tell people to sacrifice the really good for the perfect he is worsening their chances. Particularly when the people are in those groups who are already at a disadvantage and need to work even harder* to make up for it.
*Remember, Ginther showed that African-American PIs had to submit more revisions to get funded.
In the NIH extramural grant funding world the maximum duration for a project is 5 years. It is possible at the end of a 5 year interval of support to apply to continue that project for another interval. The application for the next interval is competitively reviewed alongside of new project proposals in the relevant study sections, in general.
The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.
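A quick back-of-the-envelope comparison of the rates quoted above shows just how large the renewal advantage is:

```python
# Comparison of the RPG success rates quoted above (NIH Success Rate data).
new_rates = {2013: 16.8, 2014: 18.1}       # all competing RPG applications, %
renewal_rates = {2013: 35.0, 2014: 39.0}   # competing continuations, %

for year in (2013, 2014):
    ratio = renewal_rates[year] / new_rates[year]
    print(f"{year}: renewals funded at {ratio:.1f}x the overall RPG rate")
```

Competing continuations were funded at roughly twice the overall RPG rate in both years.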
I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.
Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?
Now in my experience, the continuation application hinges on past-productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.
Quoted bits are from my prior post.
Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.
We should probably separate these for discussion because after all, how often is a panel going to recognize a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?
Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least be doing stuff that is vaguely in line with the IC that has funded your work.
Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?
Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.
This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined but broader themes may or may not come across. So for the most part if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? If the project wasn't overwhelmingly productive, didn't obviously address all of the Aims but at least generated some solid work along the general themes. Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? As a related question if the same exact Aim has returned with the argument of "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?
Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?
My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims worth of data) so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?
It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).
I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.
I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".
I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.
I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.
Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if in your view this particular application should never have been funded at that score and is a likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?
DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
*This probably speaks to my point about how multi-award PIs attribute more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg's data are per-grant-dollar of course and are not exactly the same as per-grant. But it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding." which is one to 12 per year of a full-modular NIH R01. Big range and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).
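The arithmetic behind that "one to 12 per year" conversion can be checked directly, taking the standard $250K direct costs per year of a full-modular R01:

```python
# Sanity check of the papers-per-dollar conversion in the footnote above.
full_modular_per_year = 250_000  # full-modular R01 direct costs, $/year

# The linked blog post's range: 0.6 to 5 published papers per $100K of funding.
for papers_per_100k in (0.6, 5.0):
    per_year = papers_per_100k * full_modular_per_year / 100_000
    print(f"{papers_per_100k} papers/$100K -> {per_year} papers per grant year")
```

That works out to 1.5 to 12.5 papers per funded year, consistent with the "one to 12" figure quoted in the footnote.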
**and also a pronounced lack of success renewing projects to go with it.
***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.
Should people without skin in the game be allowed to review major research grants?
I mean those who are insulated from the results of the process. HHMI stalwarts, NIH intramural, national labs, company scientists...
On one hand, I see argument that they provide needed outside opinions. To keep an insular, self-congratulating process honest.
On the other, one might observe that those who cannot be punished for bad behavior have license to be biased, jerky and driven by personal agenda.
Would you prefer review by those who are subject to the funding system? Or doesn't it matter?
Scientifically, that is.
it's hard when sr. PIs ask you where you hope your lab is in 5 yrs because the honest answer is: I hope I'm still here (1)
— Zoe McElligott (@nanopharmNC) January 12, 2016
I like the answer Zoe gave for her own question.
I, too, just hope to be viable as a grant funded research laboratory. I have my desires but my confidence in realizing my goals is sharply limited by the fact I cannot count on funding.
Edited to add:
When I was a brand new Assistant Professor I once attended a career stage talk of a senior scientist in my field. It wasn't an Emeritus wrap-up but it was certainly later career. The sort of thing where you expect a broad sweeping presentation of decades of work focused around a fairly cohesive theme.
The talk was "here's the latest cool finding from our lab". I was.....appalled. I looked over this scientist's publication record and grant funding history and saw that it was....scattered. I don't want to say it was all over the place, and there were certain thematic elements that persisted. But this was when I was still dreaming of a Grande Arc for my laboratory. The presentation was distinctly not that.
And I thought "I will be so disappointed in myself if I reach that stage of my career and can only give that talk".
I am here to tell you people, I am definitely headed in that direction at the moment. I think I can probably tell a slightly more cohesive story but it isn't far away.
I AM disappointed. In myself.
And of course in the system, to the extent that I think it has failed to support my "continuous Grande Arc Eleventy" plans for my research career.
But this is STUPID. There is no justifiable reason for me to think that the Grande Arc is any better than just doing a good job with each project, 5 years of funding at a time.
I am still not entirely sure this is not an elaborate joke.
The purpose of the NCI Predoctoral to Postdoctoral Fellow Transition Award (F99/K00) is to encourage and retain outstanding graduate students who have demonstrated potential and interest in pursuing careers as independent cancer researchers. The award will facilitate the transition of talented graduate students into successful cancer research postdoctoral appointments, and provide opportunities for career development activities relevant to their long-term career goals of becoming independent cancer researchers.
The need for a transition mechanism that graduate students can apply for is really unclear to me.
Note: These are open to non-citizens on the appropriate visa. This is unlike the NRSA pre- and post-doc fellowships.