I think the biggest question that should be asked is how MUCH the curve is going to change.
My team was freshman and junior heavy last year. I had five freshmen who improved anywhere from 60 to 105 points this year, with an average improvement of 78.
I had five juniors who improved from 30 to 45 points each, with an average improvement of 39. So for my team, a two-year difference in age cut the improvement rate in half... which works out to each year's improvement being about 70% of the previous year's.
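Here's a quick sketch of that arithmetic (just redoing my numbers above, nothing from the game itself):

```python
# Average yearly improvement observed two class-years apart
fr_avg = 78  # freshman average improvement
jr_avg = 39  # junior average improvement

# If improvement decays by a constant factor each year, the
# per-year retention rate is the square root of the two-year ratio
rate = (jr_avg / fr_avg) ** 0.5
print(round(rate, 2))  # about 0.71, i.e. roughly 70% per year
```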
Based on these numbers, if a player improved 80 points as a freshman, he would improve 56 as a soph, 39 as a junior, and 27 as a senior. I have a D3 team, so let's say the player enters with a 440 rating. His end-of-season ratings over the four years would be:
FR: 520; SO: 576; JR: 615; SR: 642. Total improvement is 202 points.
Now let's say we expected the same total improvement of 202 points for the player, but the yearly retention rate was 80% instead of 70%, meaning the rate of increase drops off less each year. The improvement each year would be 69 as a freshman, 56 as a sophomore, 43 as a junior, and 34 as a senior. End-of-season ratings would be:
FR: 509; SO: 565; JR: 608; SR: 642.
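To get the 80% numbers, you solve for the freshman gain that keeps the four-year total at 202 (a sketch; the printed ratings round a point or two differently than my hand figures, but land on the same 642):

```python
total = 202  # same four-year improvement as the 70% player
rate = 0.80
# Geometric series: g * (1 + r + r^2 + r^3) = total
g = total / sum(rate ** i for i in range(4))  # freshman gain, ~68-69
rating = 440
for year in ["FR", "SO", "JR", "SR"]:
    rating += g
    print(year, round(rating))
    g *= rate
```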
So the player is actually a little worse off along the way, because he peaks later: his ratings reach the same final point, but they trail the 70% player's until the very end.
I'm fine with whatever is done... I'm just wondering if the change is similar to what's listed above, where both sets of players reach the same final ratings.