Think of the children of Ferguson, Missouri

(by drugmonkey) Nov 24 2014

I am not surprised but I am disappointed. The grand jury convened to consider the shooting of Michael Brown in Ferguson, Missouri, by Darren Wilson has decided there are no grounds for a trial.

There is one tiny, but undeniably tangible, thing that I can do to register my feelings from afar.

Searching by Zip Code 63135 at Donors Choose, I found quite a few project proposals from the teachers of Ferguson.

I invite you to join me in donating to help the school children of Ferguson further their education.

Mrs. Hicks' third grade classroom at Ferguson Central Elementary needs a rug for children to sit on for circle time. A rug.

Mr. Eye has to teach children at Ferguson Middle School in two classrooms at once.

Now my students in my second room have to cram around the doorway or in the other room and watch my instruction, then go back and try to remember how it was done, unlike the students who can see the board from their seats and can follow along with instruction as I go through group activities. I utilize this method of instruction 50% of the time, as the other 50% is project based. My students who are in a separate room because of space problems are at a disadvantage and have less time to work, as they have to ask questions multiple times because they cannot follow along as I give instruction, tips, and address concerns.

Mr. Eye is asking for a little technological upgrading to help the poor unfortunate kids trying to learn in these circumstances. It's the US in 2014, people.

Ms. Milliano's Books project at Walnut Grove Elementary School is one that will knock you down with its topicality.

The students in our school come from low socioeconomic households, usually headed by single mothers. Many of our students are not exposed to matters and events on a national or global level. Most of the students in our building have little contact with kids their age beyond those in our school district.
...
We would use these news magazines to increase awareness of global and national current events, and promote discussion on how such events impact them. We will use the math series to demonstrate the importance and use of math in everyday life. Most importantly we will use the magazines to show the lives and accomplishments of people of their own age.


Update: ONWARD!

Mrs. Randoll's students at Walnut Grove Elementary School need help learning math.

I am so excited about these number and shape manipulatives because they are items my students can, and will, use each day. My students will use these during Math Work Stations to build math skills such as number identification, rote counting, number fluency, and sorting.

Mrs. Linder's Technology project at Airport Elementary School in Berkeley, MO, is devoted to children facing significant challenges of their own, on top of underfunded school systems and general socio-economic disparity.

I work with students with Individualized Education Plans with a variety of diagnoses including Autism, Intellectual Disability, ADHD, and many others. We work on skills to be successful in the school setting such as handwriting, cutting, feeding, and self-care skills. These students are multi-sensory learners who benefit from repetition and learning in a variety of ways.

...and just like the circle time rug, I'm tearing up again. Help if you can.

The cello has the most beautiful sound of all the strings. Mrs. Burke's music program at Berkeley Middle School could use a rack to secure the instruments. And dare she ask? A new upright bass?

Our school in the Ferguson-Florissant School District serves mostly students who are below the poverty line. They rarely have their own instruments. Young musicians use district instruments, some of which have been in the district since the 80's. They rent them for $25.00/year. Some continue to rent instruments for the 10 years that they are in the orchestra program. This is difficult for many of our parents.

17 responses so far

Expertise versus consistency

(by drugmonkey) Nov 24 2014

In NIH grant review, the standing study section approach sacrifices specific expertise for the sake of consistency of review.

When each person has 10 R01s to review, the odds are high that he or she is not the most specifically qualified reviewer for all 10.

The process often brings in additional panel members to help cover scientific domains on a per-meeting basis, but this is only partially effective.

The Special Emphasis Panel can improve on this, but mostly because the scope of the applications under review is narrower. Typically, the members of an SEP still have to stretch a bit to review some of their assignments.

Specific expertise sounds good but can come at the cost of consistency. Score calibration is a big deal. You should have seen the look of horror on my face at dinner following my first study section when some guy said, "I thought I was giving it a really good score...you guys are telling me that wasn't fundable?"

Imagine a study section with a normal-sized load of apps in which each reviewer completes only one or two reviews. The expertise would be highly customized to each proposal, but there might be less consistency and calibration across applications.
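If you want to poke at the intuition, here is a toy simulation. It is purely illustrative: the score scale and noise parameters are made up, and all it encodes is the assumption that calibration error shrinks as a reviewer's load grows while off-expertise error grows with it.

import random

random.seed(1)

def simulate(reviews_per_reviewer, n_trials=20000):
    """Mean absolute gap between a proposal's 'true' merit and its assigned score."""
    errors = []
    for _ in range(n_trials):
        merit = random.gauss(5.0, 1.5)                      # hypothetical true merit
        calibration_sd = 2.0 / reviews_per_reviewer ** 0.5  # assumed: calibration improves with load
        expertise_sd = 0.3 * reviews_per_reviewer ** 0.5    # assumed: expertise mismatch worsens with load
        score = merit + random.gauss(0, calibration_sd) + random.gauss(0, expertise_sd)
        errors.append(abs(score - merit))
    return sum(errors) / len(errors)

for load in (1, 2, 5, 10):
    print(f"{load:2d} apps per reviewer -> mean scoring error {simulate(load):.2f}")

With those made-up numbers the error bottoms out at an intermediate load, which is just a restatement of the trade-off, not evidence about where the sweet spot actually sits.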

What say you, Dear Reader? How would you prefer to have your grants reviewed?

17 responses so far

SFN 2014 Is Over

(by drugmonkey) Nov 20 2014

I woke up two hours early today with brain obsessing over our next research priorities, thanks to the meeting. Working as intended then.

For some reason I didn't get around to visiting a single exhibitor other than NIH. First time for everything, right?

It is really great to see so many of the online people I've met through blogging and to see them succeeding with their science and careers.

The postdocs who have left our department in recent years for faculty jobs are kicking all kinds of science booty and that is nice to see.

Talk to Program, talk to Program, talk to Program.......

Catching up with the science homie(s) that you've known since postdoc or grad school is good for the soul. Dedicate one night for that.

Don't bad-mouth anyone in the hearing of relative strangers.....really, you can't know who likes and respects whom, and science is very small. I know 30,000 attendees makes you think it is large but....it isn't.

Gossip about who is looking to find a new job....see above.

Whilst at SfN I ran into the AE who decided not to bother finding reviewers for our paper and heroically, HEROICALLY people, managed not to demand immediate action.

A little bummed I missed the Backyard Brains folks this year...anybody see what shenanigans they are up to now?

You know when you go over to meet and butter up some PI, trainees? Don't worry, it's awkward from our end too.

2 responses so far

Thought of the Day

(by drugmonkey) Nov 18 2014

It turns out that trolling someone else's lab from a meeting with the cool study you just thought of that THEY NEED TO GET ON RIGHT NOW is even better than doing it to your own lab.

4 responses so far

What do you know, the NIH has not solved the revision-queuing, traffic-holding-pattern problem with grant review.

(by drugmonkey) Nov 14 2014

Way back in 2008 I expressed my dissatisfaction with the revision-cycle holding pattern that delayed the funding of NIH grants.

Poking through my pile of assignments I find that I have three R01 applications at the A2 stage (the second and "final" amendment of a brand new proposal). Looking over the list of application numbers for the entire panel this round, I see that we have about 15% of our applications on the A2 revision.

Oi. What a waste of everyone's time. I anticipate many reviewers will be incorporating the usual smackdown-of-Program language. "This more than adequately revised application...."

I am not a fan of the NIH grant revision process, as readers will have noticed. Naturally my distaste is tied to the current era of tight budgets and expanding numbers of applications, but I think the principles generalize. My main problem is that review panels use the revision requirement as a way of triaging their review workload. This has nothing to do with selecting the most meritorious applications for award and everything to do with making a difficult process easier.

The bias for revised applications is supported by funding data, by round-after-round outcomes in my section, and by supporting anecdotes from my colleagues who review. ... What you will quickly notice is that only about 10% of applications reviewed in normal CSR sections get funded without being revised. ... If you care to step back Fiscal Year by Fiscal Year in the CRISP [RePORTER replaced this- DM] search, you will notice the relative proportions of grants being funded at the unrevised (-01), A1 and A2 stages have trended toward more revising in concert with the budget flattening. I provide an example for a single study section here ... What you will notice if you review a series of closely related study sections is that the relative "preference" for giving high scores to -01, A1 and A2 applications varies somewhat between sections. This analysis is perhaps unsurprising, but we should be very clear that it does not reflect some change in the merit or value of revising applications; this is putting good applications in a holding pattern.

In the meantime, we've seen the NIH first limit revisions to one (the A1 version) for a few years to try to get grants funded sooner, counting from the date of first submission. In other words, to try to get more grants funded un-Amended, colloquially at the A0 stage. After an initial trumpeting of their "success" the NIH went to silent running on this topic during a sustained drumbeat of complaints from applicants who, apparently, were math-challenged and imagined that bringing back the A2 would somehow improve their chances. Then last year the NIH backed down and permitted applicants to keep submitting the same research proposal over and over, although after the A1 the clock had to be reset to define the proposal as a "new," A0-status proposal.

I have asserted all along that this is a shell game. When we were only permitted to submit one amended version, allegedly the same topic could not come back for review in "new" guise. But guess what? It took almost zero imagination to re-configure the Aims and the proposal such that the same approximate research project could be re-submitted for consideration. That's sure as hell what I did, and never ever got one turned back for similarity to a prior A1 application. The return to endless re-submission just allowed the unimaginative in on the game is all.

[Graph: Type 1 grant awards, 2000-2013]
This brings me around to a recent post over at Datahound. He's updated the NIH-wide stats for A0, A1 and (historically) A2 grants expressed as the proportion of all funded grants across recent years. As you can see, the single study section I collected the data for before both exaggerated and preceded the NIH-wide trends. It was a section that was (apparently) particularly bad about not funding proposals on the first submission. This may have given me a very severe bias...as you may recall, this particular study section was one that I submitted to most frequently in my formative years as a new PI.

It was clearly, however, the proverbial canary in the coalmine.

The new Datahound analysis shows another key thing: the traffic-holding, wait-your-turn behavior re-emerged in the wake of the A2 ban, as I had assumed it would. The triumphant data depictions from the NIH up through the 2010 Fiscal Year didn't last, and of course those data were generated when substantial numbers of A2s were still in the system. The graph also shows that there was a very peculiar worsening from 2012 to 2013 whereby the A0 apps were further disadvantaged, once again, relative to A1 apps, which returns us right back to the trends of 2003-2007. Obviously the 2012-2013 interval was precisely when the final A2s had cleared the system. It will be interesting to see if this trend continues even in the era of endless resubmission of A2s as A0s.

So it looks very much as though even major changes in permissible applicant behavior with respect to revising grants do very little. The tendency of study sections to put grants into a holding pattern and insist on revisions to what are excellent original proposals has not been broken.

I return to my 2008 proposal for a way to address this problem:


So this brings me back to my usual proposal, of which I am increasingly fond. The ICs should set a "desired" funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then, starting the next Council round, they should apportion some fraction of their grant pickups to the applications from prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.
The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1 / 15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are by facing revision-status bias at the point of study section review...and a great deal of time and effort would be saved.
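To make the bookkeeping of that proposal concrete, here is a minimal sketch of one round of the rollover. To be clear, everything in it is invented for illustration: the 24% target, the 50/50 split, the percentile "scores" and the function names are placeholders, not anything NIH actually does, and not real data.

from collections import deque

TARGET_FRACTION = 0.24   # hypothetical "desired" payline per Council round
ROLLOVER_SHARE = 0.5     # hypothetical split: half of each round's awards go to the rollover queue

def run_round(queue, new_apps, awards_available):
    """Fund a mix of queued (previously meritorious but unfunded) apps and new apps."""
    from_queue = min(int(awards_available * ROLLOVER_SHARE), len(queue))
    funded = [queue.popleft() for _ in range(from_queue)]
    new_apps = sorted(new_apps, key=lambda a: a["percentile"])  # lower percentile = better score
    remaining = awards_available - from_queue
    funded += new_apps[:remaining]
    cutoff = int(len(new_apps) * TARGET_FRACTION)               # apps inside the target payline...
    queue.extend(new_apps[remaining:cutoff])                    # ...that missed the budget roll forward
    return funded

# Example round: 100 new apps, budget for only 15 awards
apps = [{"id": i, "percentile": i + 1} for i in range(100)]
queue = deque()
funded = run_round(queue, apps, 15)
print(len(funded), "funded now;", len(queue), "rolled into the next round")

Run a second round with fresh apps and the queued ones start taking roughly half the awards, which is the whole point: applications that cleared the notional payline get paid from a later round instead of being sent back for another revision cycle.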

11 responses so far

A simple plea for SfN 2014 attendees, particularly of the older, maler demographic

(by drugmonkey) Nov 13 2014

Do NOT creep on junior female scientists.

Do NOT creep on female scientists.

Do NOT creep on ANYBODY at the Annual Meeting.

(Getting drunk is not an excuse, btw.)

Don't so much as say anything creepy on your Facebook or Twitter or out loud where anyone can hear you.

Let everyone get as much science out of the Meeting as they can without having to worry about what your nasty self is up to, eh?

23 responses so far

SFN 2014: Minisymposium on Bath Salts and Fake Weed / Spice

(by drugmonkey) Nov 13 2014

There will be a minisymposium on synthetic drugs at the upcoming Annual Meeting of the Society for Neuroscience in Washington DC. You can find it on Tuesday, Nov 18, 2014, 1:30 PM - 4:00 PM in WCC Ballroom B.

571. Bath Salts, Spice, and Related Designer Drugs: The Science Behind the Headlines

Michael Baumann and Jenny Wiley have organized it, appropriately given their respective expertise with cathinones and cannabinoids.

The abstract reads:

Recently there has been an alarming increase in the nonmedical use of novel psychoactive substances known as “designer drugs.” Synthetic cathinones and synthetic cannabinoids are two of the most widely abused classes of designer drugs. This minisymposium presents the most up-to-date information about the molecular sites of action, pharmacokinetics and metabolism, and in vivo neurobiology of synthetic cathinones and cannabinoids.

Looks to be a can't-miss session for those of you who are interested in these drug classes.

One response so far

Are you presenting data at SFN 2014?

(by drugmonkey) Nov 11 2014

I'll extend my usual no-promises offer.

If you either drop your presentation details in the comments here or email me (drugmnky at the google mail) I'll try to work it into my schedule.

If it is really cool (and I can understand it) I might even blog it.

Hope to catch up with old blog friends and meet a few new folks.

See y'all at BANTER.

13 responses so far

Careful with your manuscript edits, folks

(by drugmonkey) Nov 11 2014

Via Twitter and Retraction Watch, some hilarity that ended up in the published version of a paper*.

Although association preferences documented in our study theoretically could be a consequence of either mating or shoaling preferences in the different female groups investigated (should we cite the crappy Gabor paper here?), shoaling preferences are unlikely drivers of the documented patterns both because of evidence from previous research and inconsistencies with a priori predictions.

Careful what sorts of editorial manuscript comments you let slip through, people.

__
*Apparently the authors are trying to correct the record, so it may not last in the official version at the journal.

9 responses so far

How would we know when science trainees bail on the career?

(by drugmonkey) Nov 10 2014

We all know about the oversupply problem in academic science wherein we are minting new PhDs faster than we create / open faculty jobs to house them.

Opinions vary on what the proper ratio is, all the way from "every PhD who wants a hard-money traditional Professorial job should get one" to "who cares if it is 1% or even 0.001%, we're getting the best through extreme competition!"

(I fall in between there. Somewhere.)

How would we detect it, however, if we've made things so dismal that too many trainees are exiting before even competing for a job?

One way might be if a job opportunity received very few qualified applicants.

Another way might be if a flurry of postdoctoral solicitations in your subfield appeared. I think that harder flogging of the advert space suggests a decline in filling slots via the usual non-public recruiting mechanisms.

I am hearing / seeing both of these things.

68 responses so far
