Archive for the 'Scientific Publication' category

H-index

I can't think of a time when seeing someone's h-index gave me a discordant view of their impact. Or, for that matter, when reviewing someone's annual cites was surprising.

I just think the Gestalt impression you generate about a scientist is going to correlate with most quantification measures.

Unless there are weird outliers, I suppose. But if there is something peculiar about a given scientist's publications that skews one particular measure of awesomeness....wouldn't someone being presented that measure discount accordingly?

Like if an h-index was boosted by a host of middle-author contributions in a much more highly cited domain than the one most people associate you with? That sort of thing.
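For reference, the h-index itself is trivial to compute: it is the largest h such that the author has h papers with at least h citations each. A minimal sketch (the citation counts here are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Note the second example: one hugely cited paper barely moves the h-index, which is exactly the kind of outlier behavior discussed above.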

2 responses so far

Supplementary Materials practices rob authors of citation credit

This is all the fault of qaz. And long time reader Nat had a blog post on this ages ago.

First, I shouldn't have to remind you all that much about a simple fact of nature in the academic crediting system. Citations matter. Our quality and status as academic scientists will be judged, in small or in large ways, by the citations that our own publications garner.

This is not to say the interpretation of citations is all the same because it most assuredly is not. Citation counting leads to all sorts of distilled measures across your career arc- Highly Cited and the h-index are two examples. Citation counting can be used to judge the quality of your individual paper as well- from the total number of cites, to the sustained citing across the years to the impressive-ness of the journals in which your paper has been cited.

Various stakeholders may disagree over which measure of citation of your work is most critical.

On one thing everyone agrees.

Citations matter.

One problem (out of many) with "Supplementary Materials" sections, which are now very close to required at some journals and heavily encouraged at others, is that they are ignored by ISI's Web of Science indexing and, so far as I can tell, by Google Scholar.

So, by engaging in this perverted system by which journals are themselves competing with each other, you* are robbing your colleagues of their proper due.

Nat observed that you might actually do this intentionally, if you are a jerk.

So now, not only can supplementary info be used as a dumping ground for your inconclusive or crappy data, but you can also stick references to your competitors in there and shaft them their citations.

Try not to be a jerk. Resist this Supplementary Materials nonsense. Science will be the better for it.

__
*yes, this includes me. I just checked some Supplementary citations that we've published to see if either ISI or Google Scholar indexes them- they do not.

25 responses so far

The "whole point" of Supplementary Data

Dec 10 2014 Published by under Impact Factor, Scientific Publication

Our good blog friend DJMH offered up the following on a post by Odyssey:
Because the whole point of supplemental material is that the publisher doesn't want to spend a dime supporting it

This is nonsense. This is not "the whole point". This is peripheral to the real point.

In point of fact, the real reason GlamourMags demand endless amounts of supplementary data is to squeeze out the competition journals. They do this by denying those other journals the data that would otherwise be offered up as additional publications. Don't believe it? Take a look through some issues of Science and Nature from the late 1960s through maybe the mid 1970s. The research publications were barely Brief Communications. A single figure, maybe two. And no associated "Supplemental Materials", either. And then, if you are clever, you will find the real paper that was subsequently published in a totally different journal. A real journal. With all of the meat of the study that was promised by the teaser in the Glam Mag fleshed out.

Glamour wised up and figured out that with the "Supplementary Materials" scam they can lock up the data that used to be put in another journal. This has the effect of both damping citations of that specific material and collecting what citations there are to themselves. All without having to treble or quadruple the size of their print journal.

Nice little scam to increase their Journal Impact Factor distance from the competition.

19 responses so far

Careful with your manuscript edits, folks

via Twitter and retractionwatch, some hilarity that ended up in the published version of a paper*.

Although association preferences documented in our study theoretically could be a consequence of either mating or shoaling preferences in the different female groups investigated (should we cite the crappy Gabor paper here?), shoaling preferences are unlikely drivers of the documented patterns both because of evidence from previous research and inconsistencies with a priori predictions.

Careful what sorts of editorial manuscript comments you let slip through, people.

__
*apparently the authors are trying to correct the record so it may not last in the official version at the journal.

9 responses so far

Small happy moments

I love it when the reviewers really GET the paper, what we are trying to do and why it is important!

10 responses so far

Once again, it is not pre-publication peer review that is your problem

The perennial discussion arose on the Twitts yesterday, probably sparked by renewed ranting from dear old @mbeisen.

For the uninitiated, a brief review of the components that go into the pre-publication approval of a scientific manuscript. In general, authors select a journal and submit a manuscript that they feel is ready for publication*. At this point an Editor (usually there is one Editor in Chief and a handful or so of sub-Editors usually referred to as Associate Editors; AEs) gives it a look and decides whether to 1) accept it immediately, 2) reject it immediately or 3) send it to peer scientists for review. The first option is exceptionally rare. The second option depends on both the obvious "fit" of the manuscript for the journal in terms of topic (e.g., "Mathematical Proof of Newton's Fifth and Sixth Laws" submitted to "Journal of Transgender Sociology") and some more-rarified considerations not immediately obvious to the home viewer.

22 responses so far

Datahound on productivity

This final figure from Datahound's post on K99/R00 recipients who have managed to win R01 funding is fascinating to me. This is a plot of individual investigators, matching their number of published papers against a weighted sum of publication. The weighting is for the number of authors on each paper as follows: "One way to correct for the influence of an increased number of authors on the number of publications is to weight each publication by 1/(number of authors) (as was suggested by a comment on Twitter). In this scenario, a paper with two authors would be worth 1/2 while a paper with 10 authors would be worth 1/10."

Applying this adjustment to the unadjusted authors/papers relationship tightens it up considerably, raising the correlation coefficient from 0.47 to 0.83.
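The weighting Datahound describes is simple to apply. A minimal sketch, using hypothetical author counts (Datahound's actual data is not reproduced here):

```python
def weighted_pub_count(author_counts):
    """Sum of 1/(number of authors) across an investigator's papers,
    per the weighting quoted above."""
    return sum(1.0 / n for n in author_counts)

# Hypothetical investigator: five papers with 2, 3, 3, 5, and 10 authors.
papers = [2, 3, 3, 5, 10]
print(len(papers))                            # raw count: 5
print(round(weighted_pub_count(papers), 2))   # weighted count: 1.47
```

So a paper with two authors contributes 1/2, a ten-author paper only 1/10, which is the dilution the scatter plot is correcting for.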

Ultimately this post shows pretty emphatically that when you operate in a subfield or niche or laboratory that tends to publish papers with a lot of authors, you get more author credits. This even survives the diluting effect of dividing each paper by the number of authors on it. There are undoubtedly many implications.

I think the relationship tends to argue that increasing the author number is not a reflection of the so-called courtesy or guest authorships that seem to bother a lot of people in science. If you get more papers produced, even when you divide by the number of authors on each paper, then this tends to suggest that the additional authors are contributing additional science. The scatter plots even seem to show a fairly linear relationship, so we can't argue that it tails off after some arbitrary cutoff of author numbers.

Another implication is purely personal. If we can generate more plots like this one across subfields or across PI characteristics (there may be something odd about the K99/R00 population of investigators, for example), there may be a productivity line against which to compare ourselves. Do we (or the candidate) have more or fewer publications than would be predicted from the average number of authors? Does this suggest that you can identify slackers from larger labs (that happen to have a lot of pubs) and hard chargers from smaller labs (that have fewer total pubs, but excel against the expected value)?

15 responses so far

Pet Peeve: "the literature"

One precious tic of academic writing I implore you to avoid is "...in the literature".

"Novel contribution to the literature..."

"Unknown in the literature..."

"Fill a gap in the literature..."

You know what I mean.

The impression you create is that this is some silly self-referential game with only internal measures of importance.

Whether this is how you see science or not.....avoid creating this impression.

Talk about knowledge or understanding instead.

23 responses so far

SfN's new eNeuro journal will attempt double blind peer review

From the author guidelines:

eNeuro uses a double-blind review process, which means the identities of both the authors and reviewers are concealed throughout the review process. In order to facilitate this, authors must ensure their manuscripts are prepared in a way that does not reveal their identity.

And how do they plan to accomplish this feat?

Eliminate author names and contact information from anyplace in the paper. See Title page for more information.

Make sure to use the third person to refer to personal work e.g. replace any phrases like 'as we have shown before' with 'has been shown before (Anonymous, 2007)'

Make sure that the materials and methods section does not refer to personal work. Do not include statements such as “using the method described in (XXX, 2007).” See Materials and Methods for more information.

Ensure that figures do not contain any affiliation-related identifier.

Depersonalize the work by using anonymous text where necessary. Do not include statements such as “as we have reported before”.

Remove self-citations and citations to unpublished work.

Do not eliminate essential self-references or other references, but limit self-references only to papers that are relevant for those reviewing the submitted paper.

Remove references to funding sources

I will be fascinated to see what procedures they have in place to determine if the blinding is actually working.

Will reviewers, asked for their top five guesses as to the identity of the group submitting the manuscript, do better than chance?

Will identification depend on the fame and status (and productivity) of the group submitting the paper?

Will it correlate with relatedness of scientific expertise?

What fraction of authors are identified all the time versus never?

Somehow, I suspect the staff of eNeuro will not really be interested in testing their assumptions.

23 responses so far

There is no "filter problem" in science

Seriously.

It is your job as a scientist to read the literature, keep abreast of findings of interest and integrate this knowledge with your own work.

We have amazing tools for doing so that were not available in times past, everything gets fantastically better all the time.

If you are a PI you even have minions to help you! And colleagues! And manuscripts and grants to review which catch you up.

So I ask you, people who spout off about the "filter" problem.....

What IS the nature of this problem? How does it affect your working day?

Since most of you deploy this in the context of wanting fewer papers to be published in fewer journals...how is that better? What is supposed to disappear from your view?

The stuff that you happen not to be interested in?

32 responses so far

Older posts »