A philosophical question regarding simulations

wow

lookit you go
5/2/2020 9:39 AM
I just read through the entire thread and I have to say it was super fascinating. Lots of great ideas and questions. I think all of us would agree that Brett's average of .390 was not an indicator of his true talent and that he was performing above his typical level. At the same time, I do think that WiS has to treat that as his true average in order to keep the game theory intact and valid for WiS. I feel like as soon as you start changing a player's "true" average from his historical seasonal average, the whole game theory for WiS would start to collapse. Why would anyone pay for Brett's 1980 if you know he is not going to perform to that standard and will instead play closer to his historical average? In fact, why would anyone choose a season that is above a player's career average if you know he is bound to regress?

I do agree that this is going to lead to absurdities that would not happen in MLB, such as a .450 season average. I am not sure how you would address that without creating more problems than you are fixing. I think that is what Bill James is alluding to with "Do you model the underlying skills, or do you model the results? And, in the end, you will find that you HAVE to have respect for the actual results, or the entire process degenerates in your hands."
5/3/2020 8:56 AM
How many times in modern history has a player hit .400 over a 450 at-bat sample, like Brett in 1980? I remember Todd Helton hitting .400 as late as August, and Tony Gwynn hit .394 in the strike year. There are probably many other examples that happened in the middle of a season or across two seasons. You can chalk it up to sample error, but Brett DID hit .390 that year. I'd say there's a decent chance he had some other stretch in his career that at least approached that figure, too. There are extraneous factors that are near impossible to control for, so that's the data point that we have to use.
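(To put a rough number on the sample-error point: treating every at-bat as an independent coin flip, here is a quick Python sketch of how often a hitter whose "true" talent is some lower figure would still finish at .390 or better over 450 at-bats. The .340 talent level is purely an assumption for illustration, not a claim about Brett or about how WIS models anything.)

from math import ceil, comb

def prob_at_least(true_avg: float, at_bats: int, threshold: float) -> float:
    """Chance a hitter with the given true average finishes at or above the
    threshold over a fixed number of at-bats, treating each at-bat as an
    independent coin flip -- a big simplification, but fine for intuition."""
    min_hits = ceil(threshold * at_bats)
    return sum(
        comb(at_bats, h) * true_avg**h * (1 - true_avg) ** (at_bats - h)
        for h in range(min_hits, at_bats + 1)
    )

# Purely hypothetical: suppose his "true" talent that year were .340
print(round(prob_at_least(0.340, 450, 0.390), 4))  # about 0.01, i.e. roughly a 1% shot

Even in that toy model the .390 isn't impossible, just unlikely, which is sort of the whole debate in miniature.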

Therein lies the philosophical aspect of this debate: do we want the sim to adjust every season to a player's true ability (if it's even possible or accurate to smooth out longer-term, multi-year data to represent one season), or do we accept the sample error for one season and look at 1980 George Brett as its own entity? I am in the latter camp. In my mind, it's "1980 George Brett," not "George Brett, adjusted for his 1980 performance." Neither is the definitively right answer, which is where much of the disagreement here stems from.

One other point: many cite that Brett averages .365 in sims, and somehow think that supports their argument that his ability level is lower than .390 (for what it's worth, his avg# is .386 and his hit rate is .375). However, I find that almost every player underperforms his adjusted figures in the sim, because the competition level is higher. 1980 George Brett didn't have to face Addie Joss with Max Carey patrolling center field, which opens up a whole other can of worms in the debate, but which I think we can agree will depress his average sim performance. You can't say that his performance in sims justifies a lower salary calculation or productivity expectation, because it's all relative.
5/3/2020 3:18 PM
More succinctly: a perfect confluence of factors, certainly not the least of which was sample error, led to him hitting .390 in 1980, and philosophically, I want my simulation to be able to use the player who produced that outcome, as error-laden as it may be.
5/3/2020 3:21 PM
I have greatly enjoyed reading this thread. And while it makes my head spin, I love reading these types of discussions.

As the one who brought up his sim avg being .365, I wasn't attempting to say that was his "true skill" or anything of that sort; I just found it...interesting. You're correct, tpistolas, that most (if not all) players' averages in the sim likely underperform their RL stats due to the many circumstances you mentioned above.

I wonder, though (and this is just a thought from someone who wasn't here during the implementation of dynamic pricing, but who now sees it being very unpopular among those I see on here), if there may be a way to use dynamic pricing based not on usage but on average performance. This could be a horrible idea, but for Brett 1980, instead of paying for his .390 avg at $9m and then getting penalized for using him and having to pay $15m, his average stats would be recalculated after every month (maybe guidelines/restrictions could be set in place to prevent owners from gaming the system by under-utilizing or severely over-utilizing him in order to lower costs, idk). So instead of paying $15m for a guy who averages out to a .365 hitter, you could pay $7-$10m (other roughly .365 hitters with 20-30 HR cost between $5.5m and $10m).
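(Just to make that concrete, here's a rough Python sketch of what monthly performance-based repricing could look like. The anchor points, $5.5m at .340 and $10m at .390, are made up for illustration and aren't anything WIS actually uses; the only point is that the price tracks the trailing sim average instead of usage.)

def reprice(sim_avg: float, low=(0.340, 5.5), high=(0.390, 10.0)) -> float:
    """Linearly interpolate a salary (in $m) from a player's trailing sim
    batting average. The anchor points are illustrative only."""
    lo_avg, lo_price = low
    hi_avg, hi_price = high
    # Clamp so one extreme month can't push the price outside the anchor band.
    frac = min(max((sim_avg - lo_avg) / (hi_avg - lo_avg), 0.0), 1.0)
    return round(lo_price + frac * (hi_price - lo_price), 2)

# Example: Brett 1980 repriced after a month of hitting .365 across sim leagues
print(reprice(0.365))  # 7.75 -- lands in the $7-$10m range mentioned above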

Idk - this could be nonsense lol
5/3/2020 5:48 PM
I've enjoyed the discussion as well. Contrarian, I think we're in agreement that Brett's 1980 season was certainly an outlier, but we disagree not only on whether we CAN control for those variable outcomes (I'd argue that using other seasons to smooth the performance may only explain SOME of the variation) but also on whether we SHOULD. I completely see your side about a .420+ outcome being illogical if we want an accurate simulation, but my personal opinion is that the wide distribution of outcomes in WIS is part of why I love it. In fact, there's a 1980 Brett in an OL that I'm in right now, and he's hitting .467 through 57 games...and I'm enjoying seeing just how high he can finish.

I like Ryno's idea of dynamic pricing factoring in past sim performances. I don't know how dynamic pricing worked in the past (it occurred long before I frequented the site), but was this tried at some point? I feel like it must have been, especially with the prices for some of the old favorites like Miguel Dilone, Elton Chamberlain, Addie Joss, etc...
5/3/2020 7:40 PM
I also like chargingryno's idea of how to make dynamic pricing work better - link it to the performance history, not the usage (I never trusted supply and demand anyway as you all know by now but that is another argument).

contrarian23, based on your convincing disagreement with Bill James' methodological principle, and since I lack your math skills, I have a question or two:

If I remember correctly, WIS does not base player capabilities on the historical averages of all MLB seasons. Rather, if 1921 Babe Ruth bats against 1963 Sandy Koufax, then (in an order I know is posted here somewhere but at the moment forget) half of the realm of possible outcomes comes from the 1921 context and half from the 1963 context, so to speak, with the specific Koufax and Ruth performances, via their actual stats, encountering each other in that mixed context (including the years of the fielders behind Sandy), and then adjusted for playing in Dodger Stadium, Yankee Stadium, Coors, etc.
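(I don't know the engine's actual internals, but the classic published way to blend a batter's rate, a pitcher's rate, and a league context is Bill James' log5 formula, so here is a rough Python sketch of that flavor of calculation. The Ruth, Koufax, and league numbers below are rounded/assumed figures just to show the shape of the math, and the real engine surely does far more than this.)

def log5(batter_avg: float, pitcher_avg: float, league_avg: float) -> float:
    """Bill James' log5 estimate of a single batter-vs-pitcher matchup rate,
    given the league context both rates are measured against."""
    num = (batter_avg * pitcher_avg) / league_avg
    den = num + ((1 - batter_avg) * (1 - pitcher_avg)) / (1 - league_avg)
    return num / den

# Illustrative inputs only: 1921 Ruth's average, an assumed average-against for
# 1963 Koufax, and a mixed league context taken as the mean of two assumed
# league batting averages (half 1921, half 1963, per the description above).
ruth, koufax = 0.378, 0.189
league = (0.291 + 0.246) / 2
print(round(log5(ruth, koufax, league), 3))  # roughly .278 with these inputs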

But what if an algorithm for determining player performance were instead based first on calculating the overall historical averages for major league baseball? It would have to be decided whether to start in 1871, 1885, 1901, or whatever year seemed right, and then adjusted for the differences in the number of teams in each era. Then each player's performance would be adjusted for how many standard deviations from that overall historical average his season as a whole was, AND adjusted for the park effect in the year he played, AND adjusted to a 162-game seasonal performance that knew to reduce 450-AB performances to how those usually turn out over a full season (maybe also adjusting the pre-1947 all-time average toward the post-integration average as well).

This would then produce an essentially NEUTRAL George Brett 1980: Brett with the hitting performance of 1980, playing in KC on artificial turf, compared against the historical norm. Probably a way-above-average hitter, but, as with Hugh Duffy in 1894, brought down to earth somewhat. That would be a true level playing field for all of the players and pitchers in history (let's remember the still-real advantage of deadball pitchers here, which remains for me the biggest single problem with realism on this site).

I don't know if such a thing is practically feasible, but as a philosophical position in our philosophical discussion, it seems to me that this is what we are looking for in another form: how can we know what it is reasonable for Brett to do if he had to play in 1927? In 1883? In 1968? In 1944? In 2000?

And a system based on adjusting different eras by reining in their excessive deviations from the historical mean would allow for setting limits on how many deviations are allowed - so no pitching 800 innings, no hitting .450.
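(To make the arithmetic of that proposal concrete, here is a rough Python sketch: express the season as standard deviations from an all-time mean, regress a short sample toward that mean, strip a park effect, and cap the deviation. Every constant - the .262 historical mean, the .028 spread, the regression weight, the 1.04 park factor - is a placeholder made up for illustration, not a researched value.)

def neutralize(season_avg: float, at_bats: int, park_factor: float,
               hist_mean: float = 0.262, hist_sd: float = 0.028,
               regress_ab: int = 200, max_sd: float = 3.0) -> float:
    """Convert a single season into a 'neutral' batting average: park-adjust,
    regress short samples toward the all-time mean, and cap how many standard
    deviations from that mean the result may sit. All constants are placeholders."""
    park_neutral = season_avg / park_factor       # crude park adjustment
    weight = at_bats / (at_bats + regress_ab)     # 450 ABs count for less than 650
    regressed = hist_mean + weight * (park_neutral - hist_mean)
    z = (regressed - hist_mean) / hist_sd
    z = max(min(z, max_sd), -max_sd)              # rein in excessive deviations
    return round(hist_mean + z * hist_sd, 3)

# Brett 1980: .390 over roughly 450 ABs in a park assumed to be hitter-friendly
print(neutralize(0.390, 450, park_factor=1.04))   # about .340 with these placeholders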

5/5/2020 8:41 AM
I've enjoyed this thread, and read through it many times. I'm torn. Mostly I fully agree with Contrarian, but I also have this nagging question about what this simulation is trying to accomplish. In one sense, it is a historical simulator. If you put a 1980 replay league together, Brett is going to mostly come close to his RL performance from that season, barring running into fatigue issues with pitchers or himself. As many have pointed out here, in an OL he typically hits closer to .365 due to the competition being stronger. So, while yes, the variation that can occur by treating Brett as a .390 hitter could lead to some extreme cases where he hits at or near levels never actually seen in MLB, the significant majority of the outcomes are going to fall within a relatively small range around his actual RL performance. And that's mostly what the sim is trying to portray.

I think it's part of the "what if" factor. "What if you could put the 1980 Brett (luck and all) against the 2018 deGrom (luck and all)?" I feel like the question of "what if," for a large portion of our questions, includes the assumption of the RL luck impact. What if '73 Davey Johnson got to hit in Coors Field? No one ever thought of Johnson as a 40-50 HR guy, but with the way the ball fell for him that season, we wonder how that same luck might have played out in an environment that might've done even more to help him.

For my own understanding of a player's talent, I fully agree that Brett was not a true .390 hitter or even a .365 hitter. But for what WIS brings to the table, I want to know what that lucky version of Brett would be like in these other scenarios. And I know I may be compounding luck with luck and getting some weird or unrealistic outcomes, but part of that is what makes it interesting to me. You take that good luck and put it in an environment where it can encounter the bad luck of another player in a high run-scoring environment and voila: Brett could've been a .450 hitter with the same luck he already had in RL if he had played in Coors Field against the 2006 Reds pitching staff, or Davey Johnson might have hit 80 HRs in Coors against the 2019 Orioles with the same elements of luck...

There's definitely some gray area... a cross-section where we're trying to do two different things: answer "what if" and also be realistic. I want both, and sometimes that "what if" includes the built-in luck.

(Not to mention the difficulty of extracting the true talent level of a huge portion of the player population whose sample sizes are much smaller than Brett's. This concept might work better on the career stats for the Diamond Legends players, where players who meet the qualifications for a season there could have their single-season stats normalized to a career factor, in addition to the standard normalization algorithm.)
6/5/2020 3:23 PM