Archive for the 'Science Publication' category

H-index

I can't think of a time when seeing someone's h-index created a discordant view of their impact. Or, for that matter, when reviewing someone's annual cites was surprising.

I just think the Gestalt impression you generate about a scientist is going to correlate with most quantification measures.

Unless there are weird outliers, I suppose. But if there is something peculiar about a given scientist's publications that skews one particular measure of awesomeness...wouldn't someone being presented that measure discount accordingly?

Like if an h-index was boosted by a host of middle-author contributions in a much more highly cited domain than the one most people associate you with? That sort of thing.
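
If you want to see how that kind of skew plays out in the number itself, here's a minimal sketch of the standard h-index calculation (the largest h such that h of your papers each have at least h citations). The citation counts are entirely made up, just to illustrate how a few well-cited middle-author papers in a hotter domain can move the index.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, for illustration only.
own_field = [40, 22, 15, 9, 7, 5, 3]       # first/senior-author papers
middle_author = [120, 95, 60, 44]          # middle-author papers in a hotter field

print(h_index(own_field))                  # 5
print(h_index(own_field + middle_author))  # 8
```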

9 responses so far

Authorship decisions

Deciding who should and should not be on the author line of a science publication is not as simple as it seems. As we know, citations matter, publications matter, and authorship of a science publication carries all sorts of implications.

A question about this arose on the Twitts:

Of course, we start from a very basic concept. Authorship of a scientific paper is deserved when someone has made a significant contribution to that paper. I can't distill it down any more than that. Nice and clean.

The trouble comes in when we consider the words significant and contribution.

This is where people disagree.

I also rely on another basic concept, which is that someone should try to match, to a large extent, the practices within the subfields in which similar work is published. This can mean the journal itself, the scientific sub-domain, or the institution type from which the paper is being submitted.

On to the specifics of this case.

First, do note that I understand that not everyone is in the position to wield ultimate authority when it comes to these matters. @forensictoxguy appears to be able to decide so we'll take it from that perspective. I will mention, however, that even if you are not the deciderer for your papers, you can certainly have an opinion and advocate this opinion with the person in charge of the decision making.

My first observation is that there is nothing wrong with single-author papers. They might be rare these days but they do occur. So don't be afraid to offer up a single-author paper now and again.

With that said, we now move on to the fact that the author line is a communication. Whether you are trying to convey a message about yourself as a scientist or not, your CV tells a story about you. And everything on there has potential implications for some audiences.

ethical, schmethical. Again, you don't throw someone on a paper "just because", you do it because they made a contribution. A contribution that you, as the primary/communicating/deciderering author, get to determine and evaluate. It is not impossible that these other people referred to in the Tweet made, or will make, a contribution. It could be via setting the environment (physical resources, administrative requirements, funding, etc.), training the author, or direct assistance with crafting the manuscript after all the work has been done. All of these are valid domains for significant contribution.

This scenario of a private industry research lab appears, from the tweets, to be one where the colleagues and higher-ups are not intimately involved in pushing paper submissions. It appears to be a case where the author in question is deciding whether or not to even bother publishing papers. Therefore, the politics of ignoring more-senior folks (if they exist) do not seem to be in play. I can't do much but read through the Tweet lines and assume this person is not risking annoying someone who is their boss. Obviously if someone of boss-like status would be miffed, it is in your interest to find some way that they can make a contribution that is significant in your own understanding, or at the very least to have a bloody discussion about it.

Leaving off the local politics, we can turn to the implications for your CV and the story of you as a scientist that it is going to tell.

If all you ever have are first-author publications it will look, to the modern eye, like you are non-collaborative, meaning not a team player. This is probably an impression you would like to avoid, yes, even within an industrial setting. But this is easy to minimize. I can't set any hard and fast rules but if you have some solo-author and some multiple-author pubs sprinkled throughout your timeline, I can't see this being a big deal. Particularly if your employment particulars do not demand a lot of pubs and, see above, the other people around you are not publishing. Eventually it would become clear that you are the one pushing publication so it isn't weird to see solo-author works.

Consider, however, that you are possibly losing the opportunity to burnish your credentials. The current academic science arc has an expectation for first-author papers as a trainee (grad student, postdoc) which is then supposed to transition to last-author pubs as a scientific supervisory person (aka professor or PI). Industry, I surmise, can have a similar path whereby you start out as some sort of lowly Scientist and then transition to a Manager where you are supervising a team.

In both of these scenarios, academic and industry, looking like you are a team-organizing, synthetic force is good. Adding more authors can be helpful in creating this impression. Looking like you are the driving intellectual participant on a sub-area of science is also good. This latter concern looks like it votes for thinning your authorship lines- after all, someone else in your group might start to leech credit away from you if they appear consistently or in a position (read: last author, co-contributing author) that implies they are more of the unifying intellectual driver.

This is where you need to actually think about your situation.

I tell trainees who are worried about being hosed out of that one deserved first-author position or being forced to accept a co-contributing second authorship this: You are in for the long haul. If you are publishing multiple papers in this area of science (and you should be) then for the most part you will have first-author papers, and in the end analysis it will be clear that you are the consistent and most important participant. It will be a simple matter for your CV to communicate that you are the ONE. So it may not be worth sweating the small stuff on each contentious authorship issue.

In a related vein, it costs you little to be generous, particularly with middle-author slots, which have next to no impact on your credit for the work.

If you only plan to publish one paper, obviously this changes the calculation.

Do you ever plan to make a push for management? Whether of the academic PI or industry variety, I think it is useful to lay down a record of being the leader of the team. That can mean being communicating author or being last author. At some point, even in industry, an ambitious scientist may wish to start being last author even under the above-mentioned scenario.

This is what brand new PIs have to do. Find someone, anyone, to be the first author on pubs so that they themselves can be the last author. This is absolutely necessary for the CV as a communication device. Undergrad volunteer? Rotation student? Summer intern? No problem, they can be the first author, right? Their level of contribution is not really the issue. I can see an industry scientist who wants to start making a push for management doing something similar to this.

As always, I return to the concept that you have to do your own research within your own situation to figure out what the expectations are. Look at what most people like yourself, in your situation, tend to do. That's your starting point. Then think about how your CV is going to look to people over the medium and long term. And make your authorship decisions accordingly.

91 responses so far

Supplementary Materials practices rob authors of citation credit

This is all the fault of qaz. And longtime reader Nat had a blog post on this ages ago.

First, I shouldn't have to remind you all that much about a simple fact of nature in the academic crediting system. Citations matter. Our quality and status as academic scientists will be judged, in small or in large ways, by the citations that our own publications garner.

This is not to say the interpretation of citations is all the same, because it most assuredly is not. Citation counting leads to all sorts of distilled measures across your career arc- Highly Cited and the h-index are two examples. Citation counting can be used to judge the quality of your individual paper as well- from the total number of cites, to the sustained citing across the years, to the impressiveness of the journals in which your paper has been cited.

Various stakeholders may disagree over which measure of citation of your work is most critical.

On one thing everyone agrees.

Citations matter.

One problem (out of many) with the "Supplementary Materials" that are now very close to required at some journals and heavily encouraged at others is that the citations made within them are ignored by ISI's Web of Science indexing and, so far as I can tell, by Google Scholar.

So, by engaging in this perverted system by which journals are themselves competing with each other, you* are robbing your colleagues of their proper due.

Nat observed that you might actually do this intentionally, if you are a jerk.

So now, not only can supplementary info be used as a dumping ground for your inconclusive or crappy data, but you can also stick references to your competitors in there and shaft them their citations.

Try not to be a jerk. Resist this Supplementary Materials nonsense. Science will be the better for it.

__
*yes, this includes me. I just checked some Supplementary citations that we've published to see if either ISI or Google Scholar indexes them- they do not.

25 responses so far

Thought of the day

Dec 05 2014. Published under Replication, ReplicationCrisis, Science Publication

One thing that always cracks me up about manuscript review is the pose struck* by some reviewers that we cannot possibly interpret data or studies that are not perfect.

There is a certain type of reviewer that takes the stance* that we cannot in any way compare treatment conditions if there is anything about the study that violates some sort of perfect, Experimental Design 101 framing, even if there is no reason whatsoever to suspect a contaminating variable. Even if, and this is more hilarious, there are reasons in the data themselves to think that there is no effect of some nuisance variable.

I'm just always thinking....

The very essence of real science is comparing data across different studies, papers, paradigms, laboratories, etc., and trying to come up with a coherent picture of what might be a fairly invariant truth about the system under investigation.

If the studies that you wish to compare are in the same paper, sure, you'd prefer to see less in the way of nuisance variation than you expect when making cross-paper comparisons. I get that. But still....some people.

Note: this in some way relates to the alleged "replication crisis" of science.
__
*having nothing to go on but their willingness to act like the manuscript is entirely uninterpretable and therefore unpublishable, I have to assume that some of them actually mean it. Otherwise they would just say "it would be better if...", right?

8 responses so far

Careful with your manuscript edits, folks

Via Twitter and Retraction Watch, some hilarity that ended up in the published version of a paper*.

Although association preferences documented in our study theoretically could be a consequence of either mating or shoaling preferences in the different female groups investigated (should we cite the crappy Gabor paper here?), shoaling preferences are unlikely drivers of the documented patterns both because of evidence from previous research and inconsistencies with a priori predictions.

Careful what sorts of editorial manuscript comments you let slip through, people.

__
*apparently the authors are trying to correct the record so it may not last in the official version at the journal.

9 responses so far

Small happy moments

I love it when the reviewers really GET the paper, what we are trying to do and why it is important!

10 responses so far

Once again, it is not pre-publication peer review that is your problem

The perennial discussion arose on the Twitts yesterday, probably sparked by renewed ranting from dear old @mbeisen.

For the uninitiated, a brief review of the components that go into the pre-publication approval of a scientific manuscript. In general, authors select a journal and submit a manuscript that they feel is ready for publication*. At this point an Editor (usually there is one Editor in Chief and a handful or so of sub-editors, typically referred to as Associate Editors; AEs) gives it a look and decides whether to 1) accept it immediately, 2) reject it immediately or 3) send it to peer scientists for review. The first option is exceptionally rare. The second option depends on both the obvious "fit" of the manuscript for the journal in terms of topic (e.g., "Mathematical Proof of Newton's Fifth and Sixth Laws" submitted to the "Journal of Transgender Sociology") and some more-rarified considerations not immediately obvious to the home viewer.
Continue Reading »

22 responses so far

Datahound on productivity

This final figure from Datahound's post on K99/R00 recipients who have managed to win R01 funding is fascinating to me. This is a plot of individual investigators, matching their number of published papers against a weighted sum of publications. The weighting is for the number of authors on each paper as follows: "One way to correct for the influence of an increased number of authors on the number of publications is to weight each publication by 1/(number of authors) (as was suggested by a comment on Twitter). In this scenario, a paper with two authors would be worth 1/2 while a paper with 10 authors would be worth 1/10."

Doing this adjustment to the unadjusted authors/papers relationship tightens up the correlation coefficient from 0.47 to 0.83.
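
For concreteness, here's a minimal sketch of the weighting Datahound describes, applied to some entirely hypothetical investigators (his actual data are not reproduced here):

```python
def weighted_pub_count(author_counts_per_paper):
    """Sum of 1/(number of authors) across an investigator's papers,
    per the weighting quoted above."""
    return sum(1.0 / n for n in author_counts_per_paper)

# Hypothetical investigators: each list holds the author count of every paper.
investigators = {
    "small-lab type": [3, 4, 2],
    "big-team type": [8, 10, 7, 9, 12],
    "middling": [5, 6, 4, 3],
}

for label, papers in investigators.items():
    print(f"{label}: {len(papers)} papers, weighted count {weighted_pub_count(papers):.2f}")
```

The correlations Datahound reports (0.47 unadjusted, 0.83 adjusted) come from his real K99/R00 sample; the toy numbers above are only meant to show the mechanics of the divisor.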

Ultimately this post shows pretty emphatically that when you operate in a subfield or niche or laboratory that tends to publish papers with a lot of authors, you get more author credits. This even survives the diluting effect of dividing each paper by the number of authors on it. There are undoubtedly many implications.

I think the relationship tends to argue that increasing the author number is not a reflection of the so-called courtesy or guest authorships that seem to bother a lot of people in science. If more papers get produced even after you divide each paper's credit by the number of authors on it, this tends to suggest that the additional authors are contributing additional science. The scatter plots even seem to show a fairly linear relationship, so we can't argue that the effect tails off after some arbitrary cutoff of author numbers.

Another implication is for the purely personal. If we can generate more plots like this one across subfields or across PI characteristics (there may be something odd about the K99/R00 population of investigators for example), there may be a productivity line against which to compare ourselves. Do we (or the candidate) have more or fewer publications than would be predicted from the average number of authors? Does this suggest that you can identify slackers from larger labs (that happen to have a lot of pubs) and hard chargers from smaller labs (that have fewer total pubs, but excel against the expected value)?

15 responses so far

Something is funny at Science Magazine

Since many of you are AAAS members, as am I, I think you might be interested in an open letter blogged by Michael Balter, who identifies himself as "a Contributing Correspondent for Science and Adjunct Professor of Journalism at New York University".

I have been writing continuously for Science for the past 24 years. I have been on the masthead of the journal for the past 21 years, serving in a variety of capacities ranging from staff writer to Contributing Correspondent (my current title.) I also spent 10 years as Science’s de facto Paris bureau chief. Thus it is particularly painful and sad for me to tell you that I will be taking a three-month leave of absence in protest of recent events at Science and within its publishing organization, the American Association for the Advancement of Science (AAAS).

Sounds serious.

What's up?

Yet in the case of the four women dismissed last month, no such explanation was made, nor even a formal announcement that they were gone. Instead, on September 25, Covey wrote a short email to Science staff telling us who the new contacts were for magazine makeup and magazine layout. No mention whatsoever was made of our terminated colleagues. As one fellow colleague expressed it to me: “Brr.”

Four staff dismissals that he blames on a newcomer to the organization.


I think that this collegial atmosphere continued to dominate until earlier this year, when the changes that we are currently living through began in earnest. Rob Covey came on board at AAAS in September 2013, and at first many of us thought that he was serving mostly in an advisory capacity; after all, he had a reputation for helping media outlets achieve their design and digital goals, a role he had played at National Geographic, Discovery Communications, and elsewhere. I count myself among those who were happy about many of the changes he brought about, including the redesign of the magazine, the ramping up of our multimedia presence, etc. But somewhere along the way Covey began to take on more power and more authority for personnel decisions, an evolution that has generated increasing consternation among the staff in all of Science’s departments.

New broom sweeps?

(In addition, according to all the information I have been able to gather about it, Covey was responsible for one of the most embarrassing recent episodes at Science, the July 11, 2014 cover of the special AIDS issue. This cover, for which Science has been widely excoriated, featured the bare legs [and no faces] of transgender sex workers in Jakarta, which many saw as a crass objectification and exploitation of these vulnerable individuals. Marcia McNutt was forced to publicly apologize for this cover, although she partly defended it as the result of “discussion by a large group.” In fact, my understanding, based on sources I consider reliable, is that a number of members of Science’s staff urged Covey not to use the cover, to no avail.)

Remember this little oopsie?

This will be interesting to watch, particularly if we hear more about the July 11 cover and any role that the individuals Balter references ("The recent dismissal of four women in our art and production departments") may have had in arguing for or against it.

4 responses so far

SfN's new eNeuro journal will attempt double-blind peer review

From the author guidelines:

eNeuro uses a double-blind review process, which means the identities of both the authors and reviewers are concealed throughout the review process. In order to facilitate this, authors must ensure their manuscripts are prepared in a way that does not reveal their identity.

And how do they plan to accomplish this feat?

Eliminate author names and contact information from anyplace in the paper. See Title page for more information.

Make sure to use the third person to refer to personal work e.g. replace any phrases like 'as we have shown before' with 'has been shown before (Anonymous, 2007)'

Make sure that the materials and methods section does not refer to personal work. Do not include statements such as “using the method described in (XXX, 2007).” See Materials and Methods for more information.

Ensure that figures do not contain any affiliation-related identifier.

Depersonalize the work by using anonymous text where necessary. Do not include statements such as “as we have reported before”.

Remove self-citations and citations to unpublished work.

Do not eliminate essential self-references or other references, but limit self-references only to papers that are relevant for those reviewing the submitted paper.

Remove references to funding sources

I will be fascinated to see what procedures they have in place to determine if the blinding is actually working.

Will reviewers asked for their top five guesses as to the identity of the group submitting the manuscript do better than chance?

Will identification depend on the fame and status (and productivity) of the group submitting the paper?

Will it correlate with relatedness of scientific expertise?

What fraction of authors are identified all the time versus never?
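
None of this would be hard to test. As one hypothetical example, not anything eNeuro has actually proposed: if reviewers recorded a single best guess of the submitting group for each blinded manuscript, an exact binomial test against an assumed chance rate would tell you whether the blinding is leaking.

```python
from math import comb

def binomial_tail_p(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): probability of at least k correct
    guesses if reviewers were guessing at the chance rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 30 blinded manuscripts, reviewers name the right group
# 12 times, and we assume a 1-in-20 chance rate for a blind guess within the subfield.
print(binomial_tail_p(12, 30, 0.05))  # vanishingly small p-value -> blinding is leaky
```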

Somehow, I suspect the staff of eNeuro will not really be interested in testing their assumptions.

23 responses so far
