Introductions

May 26 2015 Published under Science Publication, Scientific Publication

The Introduction section of research articles should evolve with a maturing field.

In the early going, every new lab that jumps into an emerging sub-sub-field of investigation tends to review the same arguments from a limited set of evidence. As time elapses, review articles are written and the body of prior work pretty much covers the bases.

So Intros should become shorter and simply reference that prior material.

Particularly when the current work is as much motivated by ongoing findings as it is by the original problem or question which motivated the original sub-sub-field. 

This can be a problem for latecomers who want to showcase all of the beautiful justification and background that they have been writing up. It is especially problematic for graduate students who have been writing at dissertation length and are loath to kill-their-babies (as the real writers say).

It is generally problematic for trainees because they have so few of their sentences, paragraphs and pages published at this point of their career.

As a reviewer I try as best I can to be tolerant of these issues. You can generally tell when a long Intro section comes from dissertation writing, so I try to be kind.

Likewise, even though I lean toward a substantial Intro myself, if someone has written sparely with clear reference to the motivating literature, well, I try to resist demanding comprehensive background.

I recently had a review of one of our manuscripts that demanded repetition of what is now a substantial body of Introductory material across many, many papers. The two semi-distinct tracks of prior research we are bringing together each have enough papers and reviews to make all the points clear. Frankly, I had thought we were risking someone telling us to cut a paragraph off of our Introduction.

That's the great thing about peer review, I suppose.

You get all kinds.

18 responses so far

  • MoBio says:

    When I review a paper with an overly long Introduction (way beyond what seems essential to 'introduce' the area) I simply ask the authors to trim it down.

    Typically they do.

  • drugmonkey says:

    Do you have a single setting on this or does it evolve with a developing field?

  • Beaker says:

    Real writers say "murder your darlings."

  • MoBio says:

    @DM: not sure what you mean by 'single setting'...but essentially I use my internal 'boredom/distraction meter' to tell me when there is excessive verbiage getting in the way of introducing the paper.

  • Microscientist says:

    I recently had a reviewer who was clearly anti-review article. This reviewer wanted stand-alone references for things like "This phenomenon has been seen in species A-L." A reference for each species. I assumed that this was because his lab wrote the paper on one of those species. If I just referenced the review article, he wouldn't get his citation count up.

  • drugmonkey says:

    Citing the review article robs the authors of the primary work of their due credit.

  • Ola says:

    The issue becomes more complicated when (for reasons that can only be described as Jurassic) journals impose a limit on the number of citations. When faced with reviewer comments asking to reference XYZ, often the only solution is to substitute references ABC for review article D, to stay within the arcane limit. The longer we allow such ridiculous rules to tether us, the more ridiculous the whole "publishing" affair appears.

  • E-Rook says:

    There's a great book on writing by Joshua Schimel, Writing Science: How to Write Papers That Get Cited and Proposals That Get Funded. He has a section on the phenomenon you're describing here. That is, what "schema" already exists for the writer to use as shorthand depends on the scope of your audience. The broader the audience, the fewer assumptions the writer can make about what knowledge is taken for granted. For a sub-sub-micro-specialty journal, you can go ahead and use the shorthand, and it would even seem naive or pedestrian to describe/define things that experts in the field take for granted. So it seems like there may be a disconnect between the reviewer's expectations and the writer's with respect to the scope of the audience (not enough information to say who is correct).

    Interestingly, Schimel didn't say much (that I recall) about the evolution of a field and how schemas can become more firm over time, so that one expert views an assertion as "accepted" and not needing a reference (we don't cite Hershey & Chase every time we talk about DNA being genetic material) whereas a similarly positioned expert might think the same assertion needs explanation/definition. I think it's interesting to watch these things play out. When I was a student, I would have been frustrated: "OMG, everyone knows this!1" and, flip-side, "Read all the background stuff I learned over the past 5 years!1" But now it's, "meh .... I don't know everything after all."

  • E-Rook says:

    Have you ever read a review article that proposed a new idea or approach, identified gaps in knowledge that should be addressed, or advocated for the use of new/different tech to address a question that had already been addressed with old tech .... and then cited that review article for its "idea"? .... not because its authors discovered anything. (Presumably these are the functions of review articles.)

  • Morgan Price says:

    If I'm reading a paper on a topic that I'm an expert in, I skip ahead to the last paragraph of the Introduction, so I don't care how long it is. PS all those journals with reference limits should be shut down.

  • drugmonkey says:

    E-Rook: of course there is a difference between the reasons for citing a review that you are describing and the reason outlined by Ola.

  • E rook says:

    For Ola, I would engage the argument head on and say why ABC are important. Discuss why XYZ merit citation less than ABC, and why, even though review article D would summarize ABC, they still deserve credit because D didn't bring enough to the table compared to the original contributions of ABC; then let the editor or a 3rd party arbitrate the argument. Ideally, the better case would win. It may be transparent to the editor that the reviewer is advocating for his/her camp, but Ola knows what she's talking about wrt the big picture on what's worth citing. I think (devil's advocate) maybe journals impose the cite limit for the same reason you've suggested being judicious in choosing what to cite in grants -- cite what's most pertinent. (At least in a paper submission, you can argue your case; in a grant, you have no chance to rebut dumbassery from reviewers.)

  • jmz4 says:

    @ E Rook,
    In my experience journals count citations towards the final character/word count, and that's why they get slashed. They shouldn't count toward the limit at all, for the reasons Ola described.

  • qaz says:

    This is all very culture-laden. Some cultures want a thorough introduction, and some want a short "just the facts, ma'am" [Freberg, S. 1953].

    Of course, this difference and the corresponding differences in citation cultures are the main reason that citation count is not comparable across fields, or even sub-fields. It's obviously different between big fields such as math (where no one cites for proofs on the page), physics (where a lot is textbook), and biology (where there is a lot of controversial data to reference), but it also changes between sub-fields, such as between molecular biology (which is notorious for citing every $@# paper related to a topic) and ephys (which often cites the first article [to give priority] and a review [to point to the literature]). This is the reason that these "quantitative" measures such as citation count, h-index, and the like are really meaningless.

    Unfortunately, they're being used in many universities to judge quality. My university is now even trying to separate out "reviews" from "primary papers". Completely ignoring E-Rook's point. My take is simply that an article that provides a new theoretical idea or a new synthesis is a theoretical contribution and not a review article, whatever stupid section-header the journal puts on it.

    jmz4: A lot of journals, particularly GlamourMagz have actual citation limits and say things like "you can't have more than 30 citations". This is, of course, silly and wrong.

    Ola: I have found, however, that these citation limits are often (not always, but often) flexible, particularly if a reviewer demands an extra citation or two.

  • Ageing PI says:

    I was once on a review panel for post-doc fellowships. One applicant I was primary reviewer for had written an extensive introduction taking up 2/3 of the page limit, outlining historical context, gaps, and new ideas. However, the research plan was so sparse as to be useless for evaluation. I commented that the grant was a review, not a research proposal. It scored poorly! However, it came back in the next round with a short, succinct introduction, a brilliant research plan, and a note that they had submitted the original intro as a review to a good journal and had it accepted. Impressive response, and the grant was funded!

  • physioprof says:

    Who even reads introductions and discussions when reviewing manuscripts? I just make sure the motherfuckers cited my work, and then move on to the results and figures.

  • RAD says:

    It's important to think about the audience for an introduction--as the comments indicate, if you're an expert, you're not reading most of the intro anyway. So the audience (besides reviewers) is nonexperts--students, other scientists expanding horizons--and defining the context is critical for them. But you don't want to drown them in too much information that won't have any meaning for them, either.
