Archive for the 'Science Communication' category

Why aren't they citing my papers?

As the Impact Factor discussion has been percolating along (Stephen Curry, Björn Brembs, YHN) it has touched briefly on the core valuation of a scientific paper: Citations!

Coincidentally, a couple of twitter remarks today also reinforced the idea that what we are all really after is other people who cite our work.

More people should cite my papers.

I totally agree. More people should cite my papers. Often.


was a bit discouraged when a few papers were pub'ed recently that conceivably could have cited mine

Yep. I've had that feeling on occasion and it stings. Especially early in the career when you have relatively few publications to your name, it can feel like you haven't really arrived yet until people are citing your work.

Before we get too far into this discussion, let us all pause and remember that all of the specifics of citation numbers, citation speed and citation practices are going to be very subfield dependent. Sometimes our best discussions are enhanced by dissecting these differences but let's try not to act like nobody recognizes this, even though I'm going to do so for the balance of the post....

So, why might you not be getting cited and what can you do about it? (in no particular order)

1) Time. I dealt with this in a prior post on gaming the impact factor by having a lengthy pre-publication queue. The fact of the matter is that it takes a long time for a study that is primarily motivated by your paper to reach publication. As in, several years of time. So be patient.

2) Time (b). As pointed out by Odyssey, sometimes a paper that just appeared reached final-draft status one, two or more years ago and the authors have been fighting the publication process ever since. Sure, occasionally they'll slip in a few new references when revising for the umpteenth time, but this is limited.

3) Your paper doesn't hit the sweet spot. Speaking for myself, my citation practices lean toward three slots for any given point I'm trying to make: the first, the best and the most recent. Rationales vary, but I would assume most of us can agree that the best, most comprehensive, most elegant and all-around most scientifically awesome study is the primary citation. Opinions might vary on primacy, but there is a profound sub-current that we must respect the first person to publish something. The most-recent is a nebulous concept because it is a moving target and might have little to do with scientific quality, but all else equal, the more recent citations give the reader access to the front of the citation thread for the whole body of work. These three concerns are not etched in stone, but they inform my citation practices substantially.

4) Journal identity. I don't need to belabor this but suffice it to say some people cite based on the journal identity. This includes Impact Factor, citing papers on the journal to which one is submitting, citing journals thought important to the field, etc. If you didn't happen to publish there but someone else did, you might be passed over.

5) Your paper actually sucks. Look, if you continually fail to get cited when you think you should have been mentioned, maybe your paper(s) just sucks. It is worth considering this. Not to contribute to Imposter Syndrome but if the field is telling you to up your game...up your game.

6) The other authors think your paper sucks (but it doesn't). Water off a duck's back, my friends. We all have our opinions about what makes for a good paper. What is interesting and what is not. That's just the way it goes sometimes. Keep publishing.

7) Nobody knows you, your lab, etc. I know I talk about how anyone can find any paper in PubMed but we all need to remember this is a social business. Scientists cite people they know well, people they've just been chatting with at a poster session and people who have just visited for Departmental seminar. Your work is going to be cited more by people for whom you/it/your lab are most salient. Obviously, you can do something about this factor...get more visible!

8) Shenanigans (a): Sometimes the findings in your paper are, shall we say, inconvenient to the story the authors wish to tell about their data. Either they find it hard to fit it in (even though it is obvious to you) or they realize it compromises the story they wish to advance. Obviously this spans the spectrum from essentially benign to active misrepresentation. Can you really tell which it is? Worth getting angsty about? Rarely.....

9) Shenanigans (b): Sometimes people are motivated to screw you or your lab in some way. They may feel in competition with you and, nothing personal but they don't want to extend any more credit to you than they have to. It happens, it is real. If you cite someone, then the person reading your paper might cite them. If you don't, hey, maybe that person will miss it. Over time, this all contributes to reputation. Other times, you may be on the butt end of disagreements that took place years before. Maybe two people trained in a lab together 30 years ago and still hate each other. Maybe someone scooped someone back in the 80s. Maybe they perceived that a recent paper from your laboratory should have cited them and this is payback time.

10) Nobody knows you, your lab, etc II, electric boogaloo. Cite your own papers. Liberally. The natural way papers come to the attention of the right people is by pulling the threads. Read one paper and then collect all the cited works of interest. Read them and collect the works cited in that paper. Repeat. This is the essence of graduate school if you ask me. And it is a staple behavior of any decent scientist. You pull the threads. So consequently, you need to include all the thread-ends in as many of your own papers as possible. If you don't, why should anyone else? Who else is most motivated to cite your work? Who is most likely to be working on related studies? And if you can't find a place for a citation....
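The thread-pulling described above is, in effect, a breadth-first walk of the citation graph: read a paper, collect its cited works, read those, repeat. A minimal sketch of that process (using a toy dictionary with hypothetical paper IDs, not any real database API):

```python
from collections import deque

def pull_the_threads(start_paper, references, max_depth=2):
    """Breadth-first walk of a citation graph: start from one paper,
    collect the works it cites, then the works those cite, and so on,
    out to max_depth hops. Returns papers in the order encountered."""
    seen = {start_paper}
    queue = deque([(start_paper, 0)])
    reading_list = []
    while queue:
        paper, depth = queue.popleft()
        reading_list.append(paper)
        if depth == max_depth:
            continue
        for ref in references.get(paper, []):
            if ref not in seen:
                seen.add(ref)
                queue.append((ref, depth + 1))
    return reading_list

# Toy citation graph (hypothetical paper IDs): paper -> works it cites.
graph = {
    "Smith2012": ["Jones2008", "Lee2010"],
    "Jones2008": ["Doe1999"],
    "Lee2010": ["Doe1999", "Roe2005"],
}
print(pull_the_threads("Smith2012", graph))
```

The point of the sketch is the mechanics of the argument above: each citation you include in your own papers is an edge in this graph, and a reader who pulls the threads can only reach your work through edges someone bothered to create.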

16 responses so far

Reviewing academic papers: Observation

When you are reviewing papers for a journal, it is in your best interest to stake out papers most like your own as "acceptable for publication".

If it is a higher IF than you usually reach, you should argue for a manuscript that is somewhat below that journal's standard.

If it is a journal in which you have published, it is in your interest to crap on any manuscript that is lesser than your typical offerings.

16 responses so far

Authors fail to illuminate the LPU issue

We most recently took up the issue of the Least Publishable Unit of science in the wake of a discussion about first authorships (although I've been talking about it on blog for some time). In that context, the benefit of having more, rather than fewer, papers emerging from a given laboratory group is that individual trainees have more chance of getting a first-author slot. Or they get more of them. This is highly important in a world where the first-author publications on the CV loom so large. Huge in fact.

I've also alluded to the fact that LPU tendencies are a benefit to the conduct of science (as a group enterprise) because it allows the faster communication of results, the inclusion of more methodological detail (critical for replication and extension) and potentially the inclusion of more negative outcomes (which saves the group time).

I have also staked my claim that in an era when most of us find, sort and organize literature with search engine tools from our desktop computers, the "costs" of the LPU approach are minimal.

The recent APS Observer reprinted a column in the NYT that I'd originally missed, entitled "The Perils of 'Bite sized' Science" (Marco Bertamini and Marcus R. Munafò; published January 28, 2012). Woot! No offense, commentariat, but you've done a dismal job so far of making an argument for why the LPU approach is so bad or detrimental to the conduct of science, particularly in response to my reasons. So I was really stoked to see this, in hopes of gaining some insight. I was sadly disappointed.

12 responses so far

Who will shelter the "shitasse" society journals?

Jan 27 2012 Published by under Science Communication, Science Publication

In the previous post on journal publishing, I observed that sub-sub-specialty journals were an anachronism of the era prior to the establishment of nearly comprehensive search engines and databases like PubMed. In that era, dividing the monthly output of scientific papers into journals made sense. First of all, it would be pretty hard to pick up a monthly issue of "The Omnibus Journal of Biomedical Science". Second, it would be unduly laborious (and paper cut-y) to keep flipping around from some index or TOC to the abstracts you wanted to scan. So there were certain physical realities driving journal specialization.

Not to mention the fact that across the decades from 1886 to 1996 (when PubMed was established) there was a gradual and sustained addition of sub-specialty societies, narrower and narrower subfields of interest and an all-around expansion of academic science. This came with the desire of yet another group of scientists to have the studies they most wanted to read selected into a smaller number of journals.

I am not privy to all the details of the history of journals in academic science. Not even close. But what I do know is that a publisher such as Elsevier has a metric boatload of small sub-specialty journals at present time. Many of which are tied to an academic society. They continue to launch NEW ones. (Phew, I'm already link exhausted- Google "Official Journal Elsevier" and see what you get. The list is enormous.)

It is, or has been, in the interests of both Elsevier and the academic society to continue this arrangement. Occasionally societies will switch publishers. For example, Neuropsychopharmacology jumped from Elsevier to Nature Publishing Group in recent memory**. Occasionally you'll be looking at the online site for a journal and notice a truncation in the archive..and have to Google around to figure out who used to publish the journal. Nevertheless, it is clear that Elsevier thinks these arrangements are good ones. Presumably because they get good return from libraries when they bundle a bunch of journals into a fixed price menu.

[Sidebar: This is a bit of a fly in the ointment, btw. One thing I do laud the publishers for is when they've taken effort to PDF all of their back catalog...back to vol 1, issue 1 in the dark ages in some cases. When there's a shift in the publisher that took place prior to the online age it seems to me that their motivation for putting up a back catalog for a journal they no longer publish is not very high.]

What do the societies get in return?

I am, shall we say*, somewhat informed about moves by at least two society level journals to switch their default member subscription from print to online. The response seemed to be overwhelming approval and lack of opting-for-print amongst the memberships. No surprise, almost all of us are complete and total converts to the benefits of online access to journal articles and personal PDF archives on our computers. Yes, even the rapidly emeritizing cohort. Still, it is nice to see the data, so to speak. Nice to see that if a society stops sending print issues to clog up faculty bookshelves collecting dust, nobody objects.

But......ego. Somehow I bet the existing societies would get their backs up a little bit if there was a suggestion that they simply give up their journal. Neuropolarbear asked what could be done about the assy position being taken by some publishers on the Research Works Act issue. This is the one trying to reverse the law demanding the deposit of all NIH funded papers into PubMedCentral (in peer-reviewed, accepted, manuscript form).

One thing we could do is to demand our society journals stop working with the jerky publishers.

This thought is what brought up all the above blathering. It is very likely that many of these small journals couldn't make it on their own. Well, duh, of course not. As noted by the irrepressible Comrade Physioproffe:

From what I understand, the other issue moneywise is that big publishers like Elsevier force institutions to pay subscription fees for shitteasse journals that no one reads by bundling them with their flagship journals. Those journals wouldn’t even exist if they had to survive on their own submission/publication fees.

But if all we're talking about is a sort of virtual journal...why can't some other umbrella journal publisher just kind of take up the slack? Why couldn't a PLoS ONE type of outfit agree to provide all the publishing services and put some sort of tag on the article to group by academic society?

*christ that was priggish, wasn't it?

**fascinating. In the case of Neuropsychopharmacology, the entire back catalog was transferred over to NPG so if you click on an article that your print copy insists was published by Elsevier, boom, you end up at NPG.

24 responses so far

Call Your Congress Critter: The Research Works Act

Websearch your CongressCritter and navigate to the email / reply form. Then give him or her an earful (eyeful) about the attempt by Reps Maloney and Issa to discontinue the requirement for publicly funded science to be made publicly available (established by the Omnibus Appropriations Act passed in March 2009).

Please. Put your Critter on alert that this is bad legislation that is bad for taxpayers. Additional detail is after the jump.

9 responses so far

Authoritarians always confuse credentials with expertise. Take Eleven.

I am greatly enjoying reading this measured takedown

For example, the article on states, “Grad students often co-author scientific papers to help with the laborious task of writing. Such papers are rarely the cornerstone for trillions of dollars worth of government climate funding, however — nor do they win Nobel Peace prizes.” I will assume that the bit about “Nobel Peace prizes” was a mistake made by the Fox News writer, since as I’m sure you’re aware, scientific achievements do not lead to Peace prizes. Further, most science of any kind doesn’t lead to a Nobel Prize. They really don’t hand out that many of them.

But let's deconstruct this one a little more. Grad students often are the lead author on scientific publications because they carried out the work. I know you feel that this shouldn't be the case. How can they do science without a Ph.D.?! Well, it turns out that's how you get a Ph.D.: by doing research that leads to publications.

of this variety of ignorant mewling about the conduct of science.

“We’ve been told for the past two decades that 'the Climate Bible' was written by the world’s foremost experts,” Canadian journalist Donna Laframboise told “But the fact is, you are just not qualified without a doctorate. In academia you aren't even on the radar at that point.”

In academia, the people who are "on the radar" for any given topic are those who are most directly and deeply involved in the work. Sometimes that breadth and depth comes from a longer career in the field. Sometimes it comes because as a grad student you have done nothing else other than focus exclusively, think deeply and read exhaustively on a given topic. Ultimately, those who should be listened to most are those that know the most.

Academic credentials can be a marker of, but are no substitute for, expertise.

4 responses so far


May 26 2011 Published by under Science Communication

How often do you cite a paper for the overall, Gestalt thrust of the story? For the whole picture?

How frequently do you cite a paper for only a figure or two out of the whole thing? Or for a method?

What does this tell you about the notion that there is such a thing as a meaningful standard of a "complete story"?

12 responses so far

On giving advice to newly transitioned Assistant Professors

Dr Becca has a post up in which she ponders a perennial issue for newly established labs....and many other labs as well.

The gist is that which journal you manage to get your work published in is absolutely a career concern. Absolutely. For any newcomers to the academic publishing game that stumbled on this post, suffice it to say that there are many journal ranking systems. These range from the formal to the generally-accepted to the highly personal. Scientists, being the people that they are, tend to take shortcuts when evaluating the quality of someone else's work, particularly once it ranges afield from the highly specific disciplines which the reviewing individual inhabits. One such shortcut is inferring something about the quality of a particular academic paper by knowledge of the reputation of the journal in which it is published.

One is also judged, however, by the rate at which one publishes and, correspondingly, the total number of publications given a particular career status.

Generally speaking there will be an inverse correlation between rate (or total number) and the status of the journals in which the manuscripts are published.

This is for many reasons, ranging from the fact that a higher-profile work is (generally) going to require more work. More time spent in the lab. More experiments. More analysis. More people's expertise. Also from the fact that the manuscript may need to be submitted to more higher-profile journals (in sequence, never simultaneously), on average, to get accepted than it would to get picked up by so-called lesser journals.

This negative correlation of profile/reputation with publishing rate is Dr Becca's issue of the day. When to keep bashing your head against the "high profile journal" wall and when to decide that the goal of "just getting it published" somewhere/anywhere* takes priority.

I am one who advises balance. The balance that says "don't bet the entire farm" on unknowables like GlamourMag acceptance. The balance that says to make sure a certain minimum publication rate is obtained. And for a newly transitioning scientist, I think that "at least one pub per year" needs to be the target. And I mean, per year, in print, pulled up in PubMed for that publishing year. Not an average, if you can help it. Not Epub in 2011, print in 2012. Again, if you can help it.

The target. This is not necessarily going to be sufficient...and in some cases a gap of a year or two can be okay. But I think this is a good general rubric for triaging your submission strategy.

It isn't that one C/N/S pub won't trump a sustained pub rate and a half-dozen society-level publications. It will. The problem is that it is a far from certain outcome. So if you end up with a three-year publication gap, no C/N/S pubs and you end up dumping the data in a half-dozen society-level journal pubs anyway...well, in grant-getting and tenure-awarding terms, a 2-3 year publication gap with "yeah but NOW we're submitting this stuff to dump journals like wildfire so all good, k?" just isn't smart.

My advice is to take care of business first, get that 1-2 pub per year in bare minimum or halfway decent journals track going, and then to think about layering high-profile risky business on top of that.

Dang, I got all distracted. What I really meant to blog about was a certain type of comment popping up in Dr. Becca's thread.

The kind of comment that I think pushes the commenter's pet agenda, vis-à-vis academic publishing, over what is actually good advice for someone who is newly transitioned to an independent laboratory position. I have my own issues when it comes to this stuff. I think the reification of IF and the pursuit of GlamourMag publication is absolutely ruining the pursuit of knowledge and academic science.

But it is absolutely foolish and bad mentoring to ignore the realities of our careers and the judging of our talents and accomplishments. I'd rather nobody *ever* submitted to a journal solely because of the journal's reputation. I long for the end of each and every academic journal in which the editors are anything other than actual working scientists. The professional journal "editors" will be, as they say, the first against the wall come the revolution in my glorious future. Etc.

But you would never catch me telling someone in Dr. Becca's position that she should just ignore IF and journal status and publish everything in the easiest venue to get accepted. Never.

You wackaloon Open Access Nazdrul and followers need to dissociate your theology from your advice giving.

*there are minimum standards. "Peer reviewed" is one such standard. I would argue that "indexed in PubMed" (or your relevant major database) is another. Also, my arbitrary sub-field snobbery** starts at an Impact Factor of around 1.something.....however I notice that the IFs of my touchstone journals for "the bottom" have inched up over the years. Perhaps "2" is my lower bound now.

**see? for some fields this is snobbery. for others, a ridiculous, snarky statement. Are you getting the message yet?

12 responses so far

Science Education Awards from NIAID

Dec 23 2010 Published by under Education, NIH, NIH funding, Science Communication

A recent funding opportunity announcement from the NIH Guide caught my eye. PAR-11-086 is for "NIAID Science Education Awards (R25)", the purpose of which is described as follows:

This funding opportunity announcement (FOA) encourages applications from organizations that focus on the development of science education for K-12 students. It is expected that these education programs will provide outreach to a large audience of students at a national level, directly or through their teachers, using approaches where successes can be measured.

Emphasis added. Despite the fact that the R25 mechanism has been used by the NIH for a long time, from well before anyone could have imagined the use of currently available new media and Internet technologies, there is an obvious fit. For my audience. For those of you who already use blogs, or even YouTube or Facebook, to disseminate scientific information.

Upsides in this particular announcement include the use of standard receipt dates for the application (I've seen some NIH ICs that use once-per-year, nonstandard receipt dates, so check your IC's announcements that use the R25 mechanism). You may request up to $175,000 per year in direct costs and propose up to 5 years of support.

Are you listening yet, my friends?

There is one obvious trouble spot, since measuring the "success" (aka, any impact or influence on knowledge) of scientific blogging is not an easy task. Still, it isn't as though this is a novel requirement or goal for websites and similar Internet based resources. There already exist ways to try to measure impact. And as you know, sometimes in the NIH grant writing game all that you need to do is provide nominal cover for favorably-disposed reviewers. You know those annoying polls that pop up on websites now and then? I seem to notice them at NIH websites with some frequency. Of course, I just close them but if this is the accepted way to monitor web impact, easy-peasy. Those who have prior experience doing brief post-seminar "evaluation" surveys can probably whip something up in SurveyMonkey or PollDaddy in a trice.

This particular FOA is directed at the K-12 primary and secondary school age groups. I'll point out that not all FOAs that I've seen using the R25 mechanism are limited to this particular audience. So you may find something that fits better with an audience that is of most interest to you under another FOA.

Need ideas? Start with RePORTER to see what is currently funded by the NIH under the R25 mechanism (New Grants, Existing Grants).

2 responses so far

The SfN Neurobloggers for 2010 are...

The Society for Neuroscience has announced the bloggers who have been selected for official recognition and promotion during the 2010 Annual Meeting, to be held in San Diego (Nov 13-17).

Theme A: Development
(Twitter @jsnsndr)
(Twitter @geneticexpns)

Theme B: Neural Excitability, Synapses, Glia: Cellular Mechanisms
(Twitter @hillaryjoy)

Theme C: Disorders of the Nervous System
(Twitter @houseofmind)

Theme D: Sensory and Motor Systems
(Twitter @Pascallisch)
(The Neuro Dilettante - Twitter @neurodilettante)
(Twitter @davederiso)

Theme E: Homeostatic and Neuroendocrine Systems
(Twitter @Beastlyvaulter)

Theme F: Cognition and Behavior
(Twitter @aechase)
(Twitter @stanfordneuro)

Theme H: History, Teaching, Public Awareness, and Societal Impacts in Neuroscience
(Twitter @thekhawaja)

7 responses so far
