Posted by just4me on 3/9/2011 3:21:00 AM:
While fatigue accumulates on a linear model - it's easy to calculate a player's fatigue level given a certain number of PA or pitches - I'm fairly certain the actual effects of fatigue don't operate on a linear model. I've done a good deal of experimenting with fatigue for both hitters and pitchers (and detailed much of it in the forums way back when) and find that the effects are relatively minimal going from 100% to 70% and from 30% to 0%, but the difference between 70% and 30% is quite substantial. Not that there isn't a performance drop between 100% and 70% - there is, and a good drop at that - but a player at 70% will still put up fairly respectable numbers that, while lower than what he'd put up at 100%, are still very usable and, depending on other factors, can even be competitive. The difference between a player at 70% and one at 30% is huge (I'll expand on this more later), but again, the difference between 30% and 0% is fairly close. There's a difference, and the player at 30% will put up noticeably better stats than the player at 0%, but in the end they're both just horrible and neither is going to help you win many (any) games.
The way I like to think of it - though these are purely rounded example numbers - is that between 100% and 70% you have about a 15% drop in performance, from 70% to 30% you have about a 70% drop, and then from 30% to 0% there is another 15% drop. This is most easily seen with pitchers, as it's easier to control their fatigue levels than those of hitters (it's easier to control how many pitches your pitcher throws than how many PA your hitters get in a game). Some of my early fatigue strategy tests and teams were built on the premise that players would still perform at a competitive level as low as 80%. And some of my games-played tests for pitchers operated on the idea that a pitcher could go as low as 70% and still be relatively effective (and by that, I mean a ~1.00 WHIP turning into a ~1.20 WHIP).
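In code form, the shape I have in mind looks something like this - a minimal sketch assuming my rounded 15/70/15 split and the 100/70/30/0 breakpoints, which are illustrative numbers and not anything pulled from the engine:

```python
# Rough sketch of the piecewise-linear effect curve described above.
# The breakpoints (100/70/30/0) and the 15/70/15 split are rounded
# example numbers, not values measured from the engine.

def performance_multiplier(fatigue_pct: float) -> float:
    """Fraction of full performance at a given fatigue level (0-100)."""
    if fatigue_pct >= 70:
        # 100% down to 70% fatigue costs ~15% of performance
        return 1.0 - 0.15 * (100 - fatigue_pct) / 30
    if fatigue_pct >= 30:
        # 70% down to 30% costs a further ~70%
        return 0.85 - 0.70 * (70 - fatigue_pct) / 40
    # 30% down to 0% costs the last ~15%
    return 0.15 - 0.15 * (30 - fatigue_pct) / 30

for f in (100, 85, 70, 50, 30, 10, 0):
    print(f"{f:3d}% fatigue -> {performance_multiplier(f):.0%} of full performance")
```

At 70% this curve leaves a pitcher at about 85% of full effectiveness, which lines up loosely with the ~1.00 WHIP turning into a ~1.20 WHIP example above.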
As a purely anecdotal example, I drafted the 1918 Babe Ruth onto a $40m team (solely to prove a point to schwarze about the available player pool in low-cap leagues, in the hope of him expanding the WISC to include some lower-cap leagues and occasionally lowering the cap in the Exclusive Ownership league back when he ran it), and that team was a borderline playoff team down the stretch. Ruth was getting fatigued but was easily my best hitter, so I slowly bumped his autorest down from 93 to 90 and then to 85. By the time we secured a playoff spot, Ruth was down to the low 80s. He played through the entire playoffs while in the blue, and over that stretch (about 20 games plus the playoffs), most of it spent blue, he put up numbers substantially better than he had all season. Now, I wouldn't expect that, and it was certainly an anomaly, but it still goes to show that a fatigued player can put up great numbers.
I have always believed that fatigue anywhere in the 80-99% range punishes pitchers and hitters less than you would expect and less than it should. I can't speak for whether this is true in the 70-80% range alluded to above, because I usually don't start players below 80%, but I have every reason to believe that just4me, who has studied this more than I have, is right.
I start pitchers and position players in the 90-99% range all the time, and I often start position players (rarely pitchers) in the 80-89% range as well. It's not just that I expect these players to perform better than their AAA replacements for that given game. One of the beliefs (maybe wrongheaded) guiding my draft decisions is that a great but somewhat fatigued player will perform better than his mediocre but refreshed counterpart. I believe, for example, that most quality players with 550 PA's will perform a little better, even when pushed to 650 PA's, than an equivalently priced player still at 100% through 650 PA's. In other words, all other things being equal, a typical SIM "overperformer" with 550 RL PA's will perform a little bit better at his 650th PA than the typical player with 650 RL PA's priced equivalently by the SIM.
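To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch. The 10% overperformance premium and the ~85% multiplier on the fatigued PA's are numbers I'm making up purely for illustration, not anything measured from the engine:

```python
# Purely illustrative arithmetic for the 550-vs-650 PA belief above.
# Assumptions (not measured): the 550 RL PA player produces 10% more
# per PA, and his last 100 PA come at roughly 85% of full performance.

baseline_per_pa = 1.00       # per-PA production of the 650 RL PA player
overperformer_per_pa = 1.10  # assumed premium for the 550 RL PA player
fatigue_multiplier = 0.85    # assumed effect over the extra 100 PA

full_season = 650 * baseline_per_pa
pushed = 550 * overperformer_per_pa + 100 * overperformer_per_pa * fatigue_multiplier

print(f"650 PA at 100%      : {full_season:.1f}")
print(f"550 PA pushed to 650: {pushed:.1f}")
```

Under those made-up numbers the pushed overperformer still comes out ahead (698.5 vs 650.0), which is the intuition; the real answer obviously depends on how big the premium actually is and how far down the fatigue curve those extra PA's fall.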
For example, the rap on Johnny Roseboro '64 as a catcher is that, with 475 PA's, he is only good for 120 games or so. I routinely start him for 550-600 PA's (140-150 games), and he still seems to show an A+ arm, if not quite his A++++ peak performance. I will similarly play Nap Lajoie '02, "intended" for only 120 games or so with 479 RL PA's, for 140-150 games, and he still bats .360/.370 with only a very modest tailoff toward the end, and his fielding range remains good, if not A+. If anything, I think I have seriously underexploited these performance anomalies because we all tend to rest players in the blue.
There is plenty of anecdotal evidence of this strategy blowing up in my face. I lost a play-in game last week when my good-hitting AAA prospect, playing at 83% (and out of position) at 1B, made an error in the 9th that allowed three unearned runs and cost me the game. WIS took Tully Sparks out of another of my playoff games with an injury on the third pitch of the game when he was pitching at 90%. And contrarian beat me in seven games of a TOC final when he took a big gamble and rested all his starters for one game to bring them back to 100%, and his more rested starters ultimately beat mine. But while those implosions were high profile, I still think they're the exception rather than the rule, even if I only have these threads and my own anecdotal observations to support my thinking.