I think that at some point, protracted refusal to cite relevant work amounts to scientific misconduct.
Who do you select when listing potential reviewers for your manuscripts?
I go for suggestions that I think will be favorably inclined toward acceptance. This may be primarily because they work on similar stuff (otherwise they aren't going to be engaged at all) but also because I think* they are favorable towards my laboratory.
(I have also taken to making sure I suggest at least 50% women but that is a different matter.)
I wouldn't suggest anyone that violates the clearest statement of automatic COI that pertains to me, i.e. the NIH grant review 3-year window of collaboration.
Where do you get your standards?
*I could always be wrong of course
When you "storyboard" the way a figure or figures for a scientific manuscript should look, or need to look, to make your point, you are on a very slippery slope.
It sets up a situation where you need the data to come out a particular way to fit the story you want to tell.
This leads to all kinds of bad shenanigans, from outright fakery to re-running experiments until you get the data to look the way you want.
Storyboarding is for telling fictional stories.
Science is for telling non-fiction stories.
These are created after the fact. After the data are collected. With no need for storyboarding the narrative in advance.
Honest scientists do not use "placeholder" images when creating manuscript figures. Period.
See this nonsense from Cell.
The life of the academic scientist includes responding to criticism of their ideas, experimental techniques and results, interpretations and theoretical orientations*.
This comes up pointedly and formally in the submission of manuscripts for potential publication and in the submission of grant applications for potential funding.
There is an original submission, a return of detailed critical comments and an opportunity to respond to those critiques with revisions to the manuscript / grant application and/or argumentative rebuttal.
As I have said repeatedly in this forum, one of my most formative scientific mentors told me that you should take each and every comment seriously. Consider what is being said, why it is being said and try to respond accordingly. This mentor told me that I would usually find that by considering even the most idiotic seeming comments seriously, the manuscript (or grant application) is improved.
I have found this to be a universal truth of my professional work.
My memory of what I was told by my mentor, versus what I have filled in myself in similar comments to my own trainees, is now very fuzzy. I cannot remember exactly how much of my current understanding this mentor laid down. For example, it is helpful to me to consider that Reviewer #3 represents about 33% of peers instead of thinking of this person as the rare outlier. I think that one may be my own formulation. Regardless of the relative contributions of my mentor versus my lived experience, it is all REALLY valuable advice that I have internalized.
The paper and grant review process is not there, by any means, to prove to you beyond a shadow of a doubt** that the reviewer's position is correct and you are wrong. Reviewers who provide citations to support a criticism are by no means the majority in my experience...although you will see this occasionally. Even then, you could always engage the cited statements from an antagonistic default setting. This is unwise.
The upshot of this critique-not-proof system means that as a professional, you have to be able to argue against yourself in proxy for the reviewer. This is why I say you need to consider each comment thoughtfully and try to imagine where it is coming from and what the person is really saying to you. Assume that they are acting in good faith instead of reflexively jumping behind paranoid suspicions that they are just out to get you for nefarious purposes.
This helps you to critically evaluate your own product.
Ultimately, you are the one that knows your product best, so you are the one in position to most thoroughly locate the flaws. In a lot of ways, nobody else can do that for you.
Professionalism demands that you do so.
*Not an exhaustive list.
**colloquially, they are leading you to water, not forcing you to drink.
The Introduction section of research articles should evolve with a maturing field.
In the early going, every new lab that jumps into an emerging sub-sub-field of investigation tends to review the same arguments from a limited set of evidence. As time elapses, review articles are written and the body of prior work pretty much covers the bases.
So Intros should become shorter and simply reference that prior material.
Particularly when the current work is as much motivated by ongoing findings as it is by the original problem or question which motivated the original sub-sub-field.
This can be a problem for latecomers who want to showcase all of the beautiful justification and background that they have been writing up. This is especially problematic for graduate students who have been writing at dissertation length and are loath to kill-their-babies (as the real writers say).
It is generally problematic for trainees because they have so few of their sentences, paragraphs and pages published at this point of their career.
As a reviewer I try as best I can to be tolerant of these issues. You can generally tell when a long Intro section comes from dissertation writing, so I try to be kind.
Likewise, even though I lean toward a substantial Intro myself, if someone writes sparely with clear reference to the motivating literature, well, I try to resist demanding comprehensive background.
I had a recent review of one of our manuscripts which demanded repetition of what is now a substantial body of Introductory material across many, many papers. The two semi-distinct tracks of prior research we are bringing together each have enough papers and reviews to make all the points clear. I had thought that, in fact, we were risking someone telling us to cut a paragraph off of our Introduction, frankly.
That's the great thing about peer review, I suppose.
You get all kinds.
Scientific publishers being told they can't keep fleecing the taxpayer so badly are basically Cliven Bundy. Discuss.
— Tom Reller, Elsevier (@TomReller) April 23, 2015
Two years after your paper is published in Journal of SocietyB, send the rejecting Editor the citation report showing that it quadrupled the JIF of the JournalA that rejected it.
Let's make this a thing, people.
— FASEB Public Affairs (@FASEBopa) March 23, 2015
The offending policy.
Of course, unless they get to the bottom of who asserted that reviewers and AEs who suggest experiments should be added to the author line, we have to assume the attitude remains.