I'll use pitching as an example since things are more obvious there. Most of this is just deductive reasoning on my part, and the actual math behind it may differ somewhat.
The 1950s have exactly one pitcher with over 200 IP and a raw WHIP under 1.00 (it's 0.99), and his HR/9 is an unusable 1.11. It's the same story with ERC: no starters are under 2.00 (the lowest is 2.24). For context, the deadball era (both decades combined) has 79 starters under 1.00 WHIP and 104 under 2.00 ERC.
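(For reference, the raw rate stats I'm quoting come straight from season totals. A rough sketch of the arithmetic in Python, with made-up numbers just for illustration; I'm leaving ERC out since Component ERA is a more involved formula.)

```python
def whip(hits: float, walks: float, innings: float) -> float:
    """Walks plus hits per inning pitched."""
    return (hits + walks) / innings

def hr_per_9(home_runs: float, innings: float) -> float:
    """Home runs allowed per nine innings."""
    return 9.0 * home_runs / innings

# Illustrative only -- not a real pitcher's season line.
print(whip(hits=180, walks=45, innings=230))   # ~0.98
print(hr_per_9(home_runs=28, innings=230))     # ~1.10
```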
The question is about normalization, though, and the answer is in how normalization is applied. A pitcher with a 200 ERC+ from either decade did equally well relative to his own season, but that does not mean the two will perform the same. Rather, the number describes how they perform relative to their year's averages (I am not sure if this is the median or the mean). So the 200 ERC+ pitcher from 1950 and the one from 1910 will each perform equally well compared to the other pitchers from their own year.
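(I don't know the sim's exact math, but the usual plus-stat convention for a "lower is better" stat is 100 × league average ÷ player value. Here's a quick sketch of why the same 200 ERC+ maps to very different raw ERCs in different run environments; the league ERC values below are invented.)

```python
def plus_stat(player_erc: float, league_erc: float) -> float:
    """Plus-style normalization for a 'lower is better' stat:
    100 = league average, 200 = twice as good as average."""
    return 100.0 * league_erc / player_erc

# Same 200 ERC+ in two very different run environments.
print(plus_stat(player_erc=1.20, league_erc=2.40))  # 200.0 (deadball-ish)
print(plus_stat(player_erc=2.00, league_erc=4.00))  # 200.0 (1950s-ish)
```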
Additionally, the sim does not use only these normalized weights; it also looks at raw stats.
Combine that with the fact that deadball pitchers (and pretty much every other decade) have better raw stats than the 1950s, plus the related fact that the 1950s league averages are worse to begin with, and 1950s guys become really tough to use effectively unless they are extreme outliers like Jim Hearn or Barry Latman (and even then, there are better values from a $/IP-relative-to-ERC# perspective).
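(Purely as a toy model, and I have no idea what weights or scaling the sim actually uses: here's why mixing raw stats into the evaluation hurts the 1950s guy even at an identical ERC+. Every number in this sketch is invented.)

```python
def blended_rating(raw_erc: float, erc_plus: float, raw_weight: float = 0.5) -> float:
    """Toy blend: convert raw ERC into a higher-is-better score and mix it
    with the normalized ERC+ score. Weights and scaling are invented."""
    raw_score = 100.0 * (4.00 / raw_erc)   # 4.00 = arbitrary reference ERC
    norm_score = erc_plus
    return raw_weight * raw_score + (1.0 - raw_weight) * norm_score

# Both pitchers are 200 ERC+, but the deadball raw ERC is far lower.
print(blended_rating(raw_erc=1.20, erc_plus=200.0))  # ~266.7
print(blended_rating(raw_erc=2.00, erc_plus=200.0))  # ~200.0
```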
One could argue that this should be reflected more in their $/IP (I would agree with this), but that's a different discussion.
The last callout is that most 1950s pitchers have average-at-best HR/9# rates relative to other decades. I take this into serious consideration when I build teams, to avoid dumb losses.