Honestly, I actually liked most of mushashi's article, except for the one part where the guy they interview says "let's just get this out and worry about the rest later." That's exactly the point that those who argue against a closer's importance are missing.
The biggest problem with using the #1 guy in the 6th or 7th inning to stop a rally or "get a key out" is that 1) the other team still gets more chances to bat, and 2) your team still gets to bat.
a) In the next half inning (or innings, if it's as early as the 6th), your hitters can and will score additional runs a statistically-definable percentage of the time, which invalidates the importance of the spot in the previous inning. That uncertainty undermines the overall importance of that spot over the course of the long run.

Once again, to make a blackjack analogy (because I dealt blackjack part-time for a year; doing something 20,000+ times gives you a different perspective on the frequency of outcomes within a sample): let's say you had a trump card where you could play a 10 whenever you wanted, and you're allowed to save the 10 for a key spot, but you're only allowed to play it once. In this analogy, the inning you are in relates to your total. So if you have 10, you can play it now and get to 20, and that's pretty good; you definitely might win, and you especially might want to take it if you have a big bet and the dealer shows 10. However, playing 10 on 10 can never be 100% perfect, because you are not last to act, and because the dealer has a hidden card AND can still draw, the final outcome can never be known. There will be a statistically-definable percentage of instances within this subset where you won by too many, and a statistically-definable percentage of instances, no matter the dealer's upcard (2, 6, 10, whatever), where the dealer will pull 21. Because your trump card is played without knowing the final outcome, playing it early is inherently wrong. I'm sure you can relate the analogy to our bullpen example.
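You can actually simulate the "dealer still pulls 21" point. Here's a quick Monte Carlo sketch in Python under deliberately simplified rules (infinite deck, dealer hits to 17, no splits or surrenders); the percentages it prints come from the simulation, not from any real casino data:

```python
import random

# Simplified infinite-deck blackjack: we stand on 20 (our "trump card"
# played on a 10), the dealer shows a 10 and draws to 17. Even from this
# strong spot, a statistically-definable slice of hands still ends with
# the dealer sitting on exactly 21.
CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # ace counted as 11 when it fits

def dealer_total(upcard: int, rng: random.Random) -> int:
    """Play out the dealer's hand (hit below 17) and return the final total."""
    total = upcard
    aces = 1 if upcard == 11 else 0
    while total < 17:
        card = rng.choice(CARDS)
        total += card
        aces += (card == 11)
        while total > 21 and aces:  # demote an ace from 11 to 1 on a bust
            total -= 10
            aces -= 1
    return total

rng = random.Random(0)
trials = 100_000
dealer_21 = sum(dealer_total(10, rng) == 21 for _ in range(trials))
print(f"dealer finishes on exactly 21 in {dealer_21 / trials:.1%} of hands")
```

The exact rate depends on the rule simplifications, but the point survives any rule set: standing on 20 is a big edge, never a lock.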
Purely from a numbers standpoint, baseball is broken up into defined units (outs) grouped into equal segments (innings). Outs are a limited commodity, and they become more and more limited as the game goes on. At the beginning of the 7th inning, the next out is 11% of an opponent's remaining outs, and the next inning is 33% of a team's remaining innings. At the beginning of the 9th inning, the next out is 33% of a team's remaining outs and the next inning is 100% of a team's remaining innings, so one out in the 9th is worth as much as the entire 7th inning. This dynamism of baseball relates to some of David Sklansky's game theory regarding poker tournaments. As a player's chip stack decreases relative to the stacks of other players, the value of one individual chip for that player is much higher relative to the value of one chip belonging to the largest stack. Most players do not realize this important mathematical oddity, especially towards the end of a tournament, which is why short-stack players virtually always short-change the value of their stacks as part of deals at the end of tournaments: they assess the value of their chips in a vacuum. Similarly, the relative importance of outs increases at the end of baseball games. These ideas can be and have been proven to be fact and cannot be debated; they are not my opinion. Sklansky and his associates went to MIT and reproduced their findings time after time; he probably knows what he's talking about compared to you or me or Baseball Prospectus or any of us.
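The "share of what's left" fractions above fall out of a few lines of arithmetic. A toy Python sketch, under the post's own assumption of a nine-inning game with no extras (the function names are mine, just for illustration):

```python
# At the start of inning i, the defense has (9 - i + 1) innings and
# 3 * (9 - i + 1) outs remaining, so one out or one full inning is
# worth its fraction of that remaining supply.
def out_share(inning: int) -> float:
    """Fraction of the opponent's remaining outs that the next out represents."""
    remaining_outs = 3 * (9 - inning + 1)
    return 1 / remaining_outs

def inning_share(inning: int) -> float:
    """Fraction of the remaining innings that this full inning represents."""
    remaining_innings = 9 - inning + 1
    return 1 / remaining_innings

print(f"7th: next out = {out_share(7):.0%} of outs, inning = {inning_share(7):.0%} of innings")
print(f"9th: next out = {out_share(9):.0%} of outs, inning = {inning_share(9):.0%} of innings")
# 7th: next out = 11% of outs, inning = 33% of innings
# 9th: next out = 33% of outs, inning = 100% of innings
```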
Subsequently, the value of an out in a 7th-inning, bases-loaded, one-run jam can be weighted relative to a 9th-inning spot with the bases empty and 2 outs (or the beginning of the 9th with the bases empty). You can weight the value of that 7th-inning spot to argue that it is very important, and it may even end up being the most important spot from a results-oriented perspective, but via the uncertainty principle, the best reliever should not come in unless his success or failure will in fact absolutely define the outcome. You can call a 7th-inning out the key out of a game if and only if that call is made in retrospect, after the game is over. Your spot may be best that one time, but over the course of thousands upon thousands of equal trials, it is sub-optimal game theory.
So now let's do some algebra. Say, as an example, over the course of a season you use your best guy in 50 games for 100 innings (6th and 7th) and I use my guy in 75 games for 75 innings (9th only, and in "save" situations only). You can weight the value of your innings and I can weight the value of mine, and my weighted value will be higher.
50 [full 6th-innings] x .25 [value of a full 6th inning] + 50 [full 7th-innings] x .33 [value of a full 7th inning] = 12.5 + 16.5 = 29.
50 [full 7th-innings] x .33 [value of a full 7th inning] + 50 [full 8th-innings] x .5 [value of a full 8th inning] = 16.5 + 25 = 41.5.
75 [full 9th innings] x 1.0 [value of a full 9th inning] = 75.
(Also, every inning past the 9th has a progressively higher value than 1.0, extrapolating from the inversely proportional values of the innings before the 9th...)
Even if you compiled all 150 of those innings, your total (29 + 41.5 = 70.5) would still be less valuable than my 75.
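The algebra above is easy to check in code. A minimal sketch using the post's own weights (each full inning valued at 1 divided by the innings remaining at its start; nothing here is an official stat):

```python
# Weighted value of innings pitched: an inning is worth 1 / (innings
# remaining at its start), per the post's share-of-remaining framing.
INNING_VALUE = {6: 0.25, 7: 0.33, 8: 0.5, 9: 1.0}

def weighted_total(usage: dict) -> float:
    """usage maps inning number -> number of full innings pitched there."""
    return sum(count * INNING_VALUE[inning] for inning, count in usage.items())

setup_6_7 = weighted_total({6: 50, 7: 50})  # 12.5 + 16.5 ≈ 29.0
setup_7_8 = weighted_total({7: 50, 8: 50})  # 16.5 + 25.0 ≈ 41.5
closer_9 = weighted_total({9: 75})          # 75.0
print(setup_6_7, setup_7_8, closer_9)
```

Even both setup workloads combined (≈70.5 weighted innings across 200 actual innings) come in under the closer's 75.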
Because of this, my guy can actually have worse numbers in a vacuum yet still be more valuable, because the conditions in which your guy obtained his stats (your innings) were less valuable; your innings were obtained under an element of future uncertainty that can (and sometimes does) invalidate your guy's success. Also, I spread my guy out over more appearances and allowed him to impact a larger volume of outcomes than your guy did. Furthermore, saving the player isn't terrible; the closer's potential energy is just as valuable as his kinetic energy.
To extend this even further, you could make a rebuttal that this algebra de-values starting pitching because 1st innings are worth so little. My rebuttal to that is that it precisely qualifies (and quantifies) the importance of "innings pitched" to a starting pitcher's overall value. The value of a guy like Justin Verlander or Clayton Kershaw is defined entirely by the volume of innings they pitch above and beyond the industry standard of 6.0 with 3 or fewer earned runs. JV and CK are constantly pitching into the 7th, 8th, and 9th innings with unbelievable efficiency, and that 7th/8th/9th-inning gravy contains virtually all of their (nearly infinite) VORP. 33 games started x 6.0 = 198 innings pitched. Last season Kershaw threw 236 innings. Those 38 extra innings are not just 38 innings in a vacuum; they are 38 weighted innings of disproportionately (read: exponentially) higher value than the first 198. This concept (among others) contributes to the all-time sabermetric value of, say, 1997 Pedro Martinez relative to the industry standard.
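The same weighting can be applied to a starter's extra innings. A hypothetical sketch: only the 33-start/236-inning totals come from the post; the split of the 38 extra innings across the 7th, 8th, and 9th is made up purely for illustration:

```python
# Each inning valued as its share of the innings remaining at its start.
# A starter's first six innings carry the low early-inning weights;
# everything past 6.0 per start lands in the high-value late buckets.
INNING_VALUE = {1: 1/9, 2: 1/8, 3: 1/7, 4: 1/6, 5: 1/5, 6: 1/4, 7: 1/3, 8: 1/2, 9: 1.0}

baseline = 33 * 6.0       # 198.0 innings at the industry-standard 6.0 per start
extra = 236 - baseline    # 38.0 innings beyond that baseline

# Hypothetical split of those 38 late innings (illustration only):
extra_split = {7: 20, 8: 12, 9: 6}
extra_value = sum(n * INNING_VALUE[inning] for inning, n in extra_split.items())
plain_value = extra * INNING_VALUE[1]  # same 38 innings valued at the 1st-inning rate
print(f"{extra:.0f} late innings: {extra_value:.1f} weighted vs {plain_value:.1f} at early-inning rates")
```

Whatever the true split, the weighted value of those late innings dwarfs the same number of innings valued at early-inning rates, which is the "gravy" point.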
Back to your guy: let's say he gets out of that jam like he is supposed to, and then your team immediately extends the lead by 2-3 runs. Is the plan to commit to his usage, or to take him out because his next inning is not as valuable? If the latter, then you've wasted a valuable asset; who do you turn to if something goes wrong? If you commit to your closer for the remainder, you win the sh*t out of this one particular outcome, but he will tire and may be unavailable tomorrow while he recovers, and when you need him in a crucial spot (the uncertainty principle once again applies) you have a higher chance of losing that outcome. Conversely, if you take him out, the opposition now has a better chance (albeit slight) of rallying relative to if you had just used your 2nd-best reliever in the key spot.
I want to point out that I'm not necessarily debating any of the numbers you've reproduced from Retrosheet or wherever; I just want to point out the importance of 11%, 9%, 4%, and 2%. Those edges are statistically significant (see: blackjack!), and in Retrosheet's case, they are a subset of results that could have been derived under false pretenses, not necessarily from optimal game theory. The statistics could have been arrived at as a result of incorrect decision-making and are therefore not reliable. Proper game theory could shift those numbers away from the observed outcomes and widen your advantage. Question everything. Test your own hypotheses. Prove your own solutions.
1/23/2014 5:54 PM (edited)