There seems to be a subpopulation of people who like to do research on the practice of research. Bjoern Brembs had a recent post on a paper showing that the publication delay associated with having to resubmit to another journal after rejection costs a paper citations.
Citations of a specific paper are generally thought of as a decent measure of impact, particularly if you can relate it to a subfield size.
Citations to a paper come in various qualities, however, ranging from the totally incorrect (the cited paper has no conceivable connection to the point for which it is cited) to the motivational (the cited paper plays a central role in the entire purpose of the citing work).
I speculate that the large bulk of citations are to one, or perhaps two, sub-experiments: essentially a per-Figure citation.
If this is the case, then citations roughly scale with how big and diverse the offerings in a given paper are.
On the other side, fans of the "complete story" argument for high-impact journal acceptance suggest that the bulk of citations are to this "story" rather than to the individual experiments.
I'd like to see some analysis of the types of citations won by papers, all the way across the food chain, from dump journals to CNS.
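The analysis I have in mind could be sketched very simply: hand-code a sample of citing passages into the categories above (incorrect, per-Figure, motivational) and tally the distribution by journal tier. Everything below is invented for illustration, including the labels, the tiers, and the toy data; a real study would need actual coded citation records.

```python
from collections import Counter

# Hypothetical records: (journal tier of the cited paper, citation type).
# Categories follow the post's taxonomy; the data are made up.
citations = [
    ("dump_journal", "per_figure"),
    ("dump_journal", "incorrect"),
    ("dump_journal", "per_figure"),
    ("CNS", "motivational"),
    ("CNS", "per_figure"),
    ("CNS", "per_figure"),
]

def type_distribution(records):
    """Tally citation types within each journal tier."""
    tally = {}
    for tier, ctype in records:
        tally.setdefault(tier, Counter())[ctype] += 1
    return tally

for tier, counts in type_distribution(citations).items():
    print(tier, dict(counts))
```

If the "complete story" crowd is right, the motivational share should climb with journal prestige; if citations are mostly per-Figure, the distributions should look similar across tiers.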