My initial mindset on reviewing a manuscript is driven by two things.
First, do I want to see it in print? Mostly, this means: is there even one Figure so cool and interesting that it needs to be published?
If the answer is no, that manuscript will have an uphill battle. If it is a yes, I'm going to grapple with the paper more deeply. And if there ARE big problems, I'm going to try to point them out as clearly as I can, in a way that preserves the importance of the good data.
Second, does this paper actively harm knowledge? I'm not as amped up as some people about trivial advances, findings that are boring to me, purely descriptive studies, etc. So long as the experiments seem reasonable, properly conducted, analyzed appropriately and interpreted compactly, well, I am not going to get too futzed. Especially if I think there are at least one or two key points that need to be published (see the First criterion). If, OTOH, I think the studies have been done in such a way that the interpretation is wrong or clearly not supported...well, that paper is going to get a recommendation for rejection from me. I have to work up to Major Revision from there.
This means that my toughest review jobs are the ones where these two criteria are in conflict. It takes more work when I have a good reason to want to see some subset of the data in print but I think the authors have really screwed up the design, analysis, or interpretation of some major aspect of the study. I have to identify the major problems and also comment specifically, in a way that reflects my thinking about all of the data.
Walking the thin line required for a Major-Revision recommendation causes a problem: I suppose I may pull my punches in expressing just how bad the bad part of the study really is. Then, should the manuscript be rejected from that journal, the authors potentially have a poor understanding of just how big the problem with their data really is. Especially if the rejection has been based on differing comments among the three sets of reviewers. Sometimes the other reviewers will have latched on hard to a single structural flaw...which I am willing to accept if I think it is in the realm of "oh, you want another whole Specific Aim's worth of experiments for this one paper, eh?".
The trouble is that the authors may similarly decide that Reviewer 3 and Reviewer 1 are just being jerks and that the only strategy is to send it off, barely revised, to another journal and hope for three well-disposed reviewers next time.
The further trouble comes when the next journal sends the manuscript to at least one reviewer who has seen it before...such as YHN. And now I have another, even harder, job of sorting priorities. Are the minimal fixes an improvement? Enough of one? Should I be pissed that they just didn't seem to grasp the fundamental problem? Am I just irritated that, IMO, if they were going to do this they should have jumped right down to a dump journal instead of trying to battle at a lateral-move journal?