This discussion recurs on a regular basis and, as best I can tell, so far no one has altered their opinion a bit. So it's something of a mystery why it keeps going. For me, everything people write about is consistent with random probability variations. crazy's example of a set of coin flips is right on. You can have seemingly incredible runs of heads or tails where the only explanation is probability. Also, while baseball pundits frequently write about how the LONG baseball season evens out probability bumps and ensures that the best team prevails, that's just not true. Statistically speaking, a set of 162 outcomes is actually a relatively SMALL sample size. Ideally, you would like millions of iterations before concluding that the long-term effects of probability aberrations have been reduced to near zero. Of course, as the great economist J.M. Keynes observed many years ago, in the long run we're all dead, so those kinds of data runs in the real world are not practical.
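To put a rough number on how small 162 games really is: if a team is a true .500 coin-flip team, its season win total follows a binomial distribution, and the standard deviation works out to about 6.4 wins. That means roughly 68 to 94 wins is within two standard deviations of pure luck. A quick sketch of the arithmetic (the function name is just my own):

```python
import math

# Wins for a true .500 team over n games follow a binomial distribution.
# Standard deviation of wins = sqrt(n * p * (1 - p)).
def win_sd(games, p=0.5):
    return math.sqrt(games * p * (1 - p))

sd_wins = win_sd(162)
print(round(sd_wins, 2))         # ~6.36 wins of spread from luck alone
print(round(sd_wins / 162, 3))   # ~0.039 in winning percentage
```

So a genuinely average team can land anywhere from the high 60s to the mid 90s in wins without anything but chance at work.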
I'm too cheap to try this out, but I suspect that an open league of 24 identical teams with identical managerial settings would show the sort of winning-percentage variations you see at the end of most real baseball seasons. A handful of teams would excel, a handful would blow, and the majority would cluster on either side of .500. There would be a number of long winning and losing streaks. By contrast, if you programmed the computer to play a 100 million game season, the end result would be every team right around .500. The difference would be solely the result of sample size.
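You don't even need the sim engine to see this; a coin-flip league shows the same thing. The sketch below treats each team's games as independent coin flips (a simplification, since in a real league one team's win is another team's loss) and uses 100,000 games rather than 100 million just to keep the runtime sane:

```python
import random

random.seed(42)

def season(teams=24, games=162, p=0.5):
    # Every team is identical: each game is a fair coin flip.
    # Simplification: games are independent, not head-to-head.
    pcts = []
    for _ in range(teams):
        wins = sum(random.random() < p for _ in range(games))
        pcts.append(wins / games)
    return pcts

short = season(games=162)       # one normal-length season
long = season(games=100_000)    # a vastly longer "season"

print(max(short) - min(short))  # sizable spread: looks like real standings
print(max(long) - min(long))    # tiny spread: everyone hugs .500
```

Run it a few times and the 162-game league will reliably produce "division winners" and "cellar dwellers" out of 24 perfectly identical teams, while the long season flattens everyone toward .500.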
Ultimately this is all a matter of belief. Neither side can amass the sort of data that would definitively answer the question. Kudos to grizzly for acknowledging this. My belief is different from his but no better supported by data or argument.