Ticket response (after well over 2 weeks mind you) Topic

Posted by noah23 on 2/6/2014 11:37:00 PM (view original):
I dunno slid.....all I know is that the responses (and there have been three separate ones) have only shown one thing. Oriole's version of acceptable deviation in results and mine are so far removed from each other that it seems the gap will never even be bridged.

He is proud of the standard deviation in this engine. I guess he doesn't care about the outlier numbers. What I care about is the outlier numbers. They should not exist, or at least be 1 in 100, not 3 in 10. I want this game to work. I want to play this game. It's clear I no longer understand anything about it, and what I now understand about results I do not like. These deviations are happening with NO INJURIES. What happens when injuries are introduced? Is the worst team in the league going to be able to compete with the best randomly?

I'm not a statistician by any means, but I *think* you are looking at the numbers wrong.

In a small sample size - and 10 games is VERY small - there is a much higher chance that your distribution of results will be skewed - the sample size is not large enough to overcome the "random" nature of running a simulation. Plus, I don't think you can take the spread of one set of results (home team scores) and add it to the spread of a second set of results (away team scores) to get an "overall" spread.  The two sets of scores are mostly independent of each other (I say mostly because a higher score by one team could result in more possessions for the other team).  You could look at the score spread per game, but again, the sample size is so small as to be statistically insignificant.  The more games you have, the more reliable a feel you get for how the engine works on the whole.

The standard deviation SHOULD be what you are looking at if you are wanting the game to produce similar results each time the sim is run. 
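For what it's worth, the small-sample point can be demonstrated with a quick simulation. Every score below is drawn from one fixed distribution (the mean and standard deviation are invented for illustration, not taken from the engine), yet the high-low spread of a 10-game sample still swings widely from sample to sample:

```python
import random
import statistics

random.seed(42)

def sample_range(n_games, mean=30.0, sd=7.0):
    """Max-minus-min score spread of one simulated n-game sample."""
    scores = [random.gauss(mean, sd) for _ in range(n_games)]
    return max(scores) - min(scores)

# Draw 1,000 independent 10-game "seasons" from the SAME distribution.
ranges = [sample_range(10) for _ in range(1000)]
print(f"10-game spreads: min={min(ranges):.1f}, "
      f"median={statistics.median(ranges):.1f}, max={max(ranges):.1f}")
```

Every one of those samples comes from the same distribution; the variation in spread is pure sampling noise.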
2/7/2014 12:54 PM
Posted by noah23 on 2/5/2014 1:55:00 AM (view original):
My problem with this game in 2.0 was just what was illustrated in the 10 game test he did.

High score Bridgewater  52
Low score Bridgewater  26
High score Paterson 34
Low score Paterson 3

That is a 57 point deviation over 10 games. Invariably, there are deviations in football. The closer in talent the teams are, the more deviation there likely is. These are largely based on environmental factors and injuries (neither of which is present in this game atm). I would argue that if all the variables, weather, injuries, etc. were held constant, the standard deviation in a football game would be very slight. But let's just say we are trying to simulate those variables (outside of injuries, which eventually will be reintroduced). I am not comfortable with a deviation of 57 points in the engine. I'm just not. 28? Maybe. 35? Stretching it. 57 points is just plain random (without injury factors).
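For reference, the arithmetic behind the quoted 57 is the sum of the two per-team high-low spreads from the 10-game test:

```python
# Per-team high/low scores from the quoted 10-game test.
bridgewater_high, bridgewater_low = 52, 26
paterson_high, paterson_low = 34, 3

bridgewater_spread = bridgewater_high - bridgewater_low  # 26
paterson_spread = paterson_high - paterson_low           # 31

# The quoted "57 point deviation" is the sum of the two spreads.
print(bridgewater_spread, paterson_spread,
      bridgewater_spread + paterson_spread)  # → 26 31 57
```

Whether adding the two spreads is a meaningful statistic is exactly what the replies dispute.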

I'm not sure I understand the math here....but I think it is flawed....which might explain why I don't understand it.

I think your definition of "deviation" is not proper.  It is certainly not a "standard deviation" as used in statistical analysis.
2/7/2014 5:54 PM
The standard deviation is likely what oriole said it was. What I'm saying is that the standard deviation doesn't tell the whole story. I don't care how tight the standard deviation is if there is a 57 point spread between high and low scores in a game.
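That point is statistically defensible: two distributions can have nearly identical standard deviations but very different extremes, so a quoted SD alone doesn't bound the worst-case spread. A sketch with invented numbers (nothing here is measured from the engine):

```python
import random
import statistics

random.seed(7)
N = 20000

# Two invented distributions tuned to have similar standard deviations:
# a plain normal, and a mixture with occasional wild draws (heavy tails).
normal = [random.gauss(0, 5) for _ in range(N)]
mixture = [
    random.gauss(0, 2.8) if random.random() < 0.95 else random.gauss(0, 19)
    for _ in range(N)
]

for name, data in [("normal", normal), ("mixture", mixture)]:
    spread = max(data) - min(data)
    print(f"{name:8s} sd={statistics.pstdev(data):5.2f} spread={spread:6.1f}")
```

The standard deviations come out close, but the heavy-tailed mixture produces a far wider high-low spread, which is the "outlier" complaint in a nutshell.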
2/7/2014 7:25 PM
Pretending outliers don't exist doesn't make them disappear
2/7/2014 7:25 PM
Posted by noah23 on 2/7/2014 7:25:00 PM (view original):
Pretending outliers don't exist doesn't make them disappear
True, but ignoring the rest of the data population doesn't make them more important than they are, either.

You keep talking about a 57 point spread.  There isn't one.  There's a 26 point spread in the results for one team, and a 31 point spread in the results for the other.  The data sets are two distinct populations, and so you can't "add the spreads".

The reason a "normal distribution curve" looks the way it does is because there are fewer instances of the data on the left and right edges of the curve - what you are calling "outliers".  The curve is higher towards the middle because there are more instances of data as you get closer to the median value.  And that's what you want to see when you expect the simulation to present consistent results.  
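The bell shape described above is easy to visualize with a quick text histogram of simulated scores (the mean and SD are invented for illustration):

```python
import random
from collections import Counter

random.seed(1)
# Bucket 10,000 simulated scores into 5-point bins to see the bell shape:
# many results near the middle, few at the edges (the "outliers").
scores = [round(random.gauss(30, 7)) for _ in range(10000)]
buckets = Counter((s // 5) * 5 for s in scores)
for lo in sorted(buckets):
    print(f"{lo:3d}-{lo + 4:3d} {'#' * (buckets[lo] // 100)}")
```

The middle bins tower over the edge bins, so rare extreme results are expected even from a perfectly consistent generator.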
2/7/2014 11:47 PM
I have fun playing this game.

(although I do enjoy these types of discussions among statisticians and engineers - of which I am one)

Just sayin'.
2/8/2014 12:01 AM
Posted by bhazlewood on 2/7/2014 11:47:00 PM (view original):
Posted by noah23 on 2/7/2014 7:25:00 PM (view original):
Pretending outliers don't exist doesn't make them disappear
True, but ignoring the rest of the data population doesn't make them more important than they are, either.

You keep talking about a 57 point spread.  There isn't one.  There's a 26 point spread in the results for one team, and a 31 point spread in the results for the other.  The data sets are two distinct populations, and so you can't "add the spreads".

The reason a "normal distribution curve" looks the way it does is because there are fewer instances of the data on the left and right edges of the curve - what you are calling "outliers".  The curve is higher towards the middle because there are more instances of data as you get closer to the median value.  And that's what you want to see when you expect the simulation to present consistent results.  
The outliers ruin the sim. That's what I'm saying.
2/8/2014 12:43 AM
I would say the biggest problem with the engine is that it is too consistent.  IMO, advantages are systematically whittled away by the engine structure, mainly because they are not based on a single distribution curve.  Because of this, results all get pushed towards the middle of the results curve.  The engine cannot be tweaked to say that if the offensive line is one standard deviation better than the defensive line (all else being equal), the offensive line will perform one standard deviation better than average.  You can't say that because your RB is one standard deviation worse than other RBs, he will perform one standard deviation worse relative to other RBs.  

This is my understanding of what happens:
The play is broken into several steps, and each step has three distributions: even, slightly better/worse, much better/worse.  It's not entirely clear to me that there aren't severe overlap problems between these three distributions.  I think it's highly likely that, even if you fell into the even category, you could get a good roll and wind up with the same outcome as much better/worse and move on to the next step.  This happens over several steps in a play, so the results get even more muddled.  There's no way to drill down into results to fine-tune it without experimenting and possibly screwing up everything else because of the overlap.  It's a multi-step, multi-distribution result that may make sense to a programmer, but doesn't make sense from a statistical or modeling slant.

This has been my issue with the engine structure since inception.  Using steps and buckets muddles everything and increases the chances of having very similar outcomes on a play-by-play basis.  It's also more difficult to make changes, because you can't isolate particulars without adjusting everything else to ensure you're not screwing something else up with the correction.
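The step-and-bucket structure described above can be sketched as a toy model. Everything below is an assumption: the bucket names, outcome ranges, and three-step structure are invented to illustrate the overlap argument, not taken from the actual engine:

```python
import random
import statistics

random.seed(3)

# Hypothetical per-step outcome ranges. Note how much they overlap:
# an "even" matchup and a "big" edge share most of their outcome space.
BUCKETS = {
    "even":   (-2, 4),
    "slight": (-1, 6),
    "big":    (0, 8),
}

def bucket(gap):
    """Classify the rating gap into one of three coarse buckets."""
    if abs(gap) < 5:
        return "even"
    return "slight" if abs(gap) < 15 else "big"

def run_play(off_rating, def_rating, steps=3):
    """Sum of per-step draws; a negative gap mirrors the bucket's range."""
    gap = off_rating - def_rating
    lo, hi = BUCKETS[bucket(gap)]
    if gap < 0:
        lo, hi = -hi, -lo
    return sum(random.uniform(lo, hi) for _ in range(steps))

even_plays = [run_play(50, 50) for _ in range(5000)]
edge_plays = [run_play(70, 50) for _ in range(5000)]
print(f"even matchup: mean={statistics.mean(even_plays):.1f}")
print(f"big edge:     mean={statistics.mean(edge_plays):.1f}")

# Fraction of big-edge plays that land inside the even-matchup range.
overlap = sum(e <= max(even_plays) for e in edge_plays) / len(edge_plays)
print(f"big-edge plays indistinguishable from an even matchup: {overlap:.0%}")
```

Even with a large rating edge, a sizable share of plays land inside the range an even matchup produces, which is the "muddling" being described: overlapping buckets compounded over several steps compress advantages toward the middle.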
2/8/2014 1:03 AM
Slid64er nailed it.  Looking through every Dev chat and forum post that I can since inception, this is pretty much it.  At this point, Oriole seems to be shoe-horning fixes and changes into the engine to get it to do things it was never built to do.  I always assumed that when they designed 3.0, they would be building it from the ground up but looking more and more at the results, I believe they simply started with their base from 2.0, stripped it down and added back onto it which is a poor way to program and causes all sorts of unforeseen issues.

What WIS should have done is simply divide the game into two parts: the UI that everyone here sees, and the engine that runs the algorithms and feeds its outputs into the UI.  They could then make changes to each part independently without screwing up the other. 
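The split being proposed can be sketched in a few lines. All names here are hypothetical; nothing below reflects the real WIS codebase:

```python
import random

def simulate_game(home_rating, away_rating, seed=None):
    """Engine layer: a pure function from inputs to a plain result dict."""
    rng = random.Random(seed)
    return {
        "home_score": max(0, round(rng.gauss(20 + home_rating / 10, 7))),
        "away_score": max(0, round(rng.gauss(20 + away_rating / 10, 7))),
    }

def render_result(result):
    """UI layer: formats engine output, knows nothing about the algorithms."""
    return f"Home {result['home_score']} - Away {result['away_score']}"

print(render_result(simulate_game(80, 60, seed=1)))
```

Because the engine layer is a pure function of its inputs (plus a seed), it can be tested and tuned without touching the UI, and vice versa.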
2/8/2014 9:53 AM
Posted by jtd79 on 2/8/2014 9:53:00 AM (view original):
Slid64er nailed it.  Looking through every Dev chat and forum post that I can since inception, this is pretty much it.  At this point, Oriole seems to be shoe-horning fixes and changes into the engine to get it to do things it was never built to do.  I always assumed that when they designed 3.0, they would be building it from the ground up but looking more and more at the results, I believe they simply started with their base from 2.0, stripped it down and added back onto it which is a poor way to program and causes all sorts of unforeseen issues.

What WIS should have done is simply divide the game into two parts: the UI that everyone here sees, and the engine that runs the algorithms and feeds its outputs into the UI.  They could then make changes to each part independently without screwing up the other. 
Actually, I believe the UI *is* independent of the game engine.  

More than once, I've called for a total re-write of the game engine, including the overall design of how it works.  
2/8/2014 12:37 PM
Thanks Bob, I wasn't sure if that had been confirmed or not.  Either way, a complete rewrite of the engine, including testing, really shouldn't take more than 6 months.  They control all of the variables that can be entered into the algorithms, which significantly lowers the chance of unexpected variance.  Even a single good programmer should be able to do this, but they would need a solid tester to help them out running sims to look for outliers and bugs.

While this is really not that difficult, I have a feeling that WIS tends to force the programmers to divide their time between fixing bugs in the current engine, working on other projects within the WIS universe, etc.  If they could simply let a programmer focus 100% on building something from scratch until it was done, this game would flourish because of it.
2/8/2014 1:08 PM