Just yesterday in a comment on the 2011 WNBA All-Playoff Team post, Shannon correctly pointed out that I had sort of loosely tossed out the term "plus/minus" without sufficient explanation.
The confusion, as I described briefly there, was that there are two forms of plus/minus currently publicly available to WNBA fans: the "raw plus/minus" that shows up in the boxscore and the "net plus/minus" provided by the Minnesota Lynx stats site that I normally use in my posts on this site.
As it turns out, my insufficient explanation - and Seimone Augustus' surprisingly negative playoff net plus/minus - presents an excellent opportunity to discuss the value of the metric before introducing a third type of plus/minus to WNBA fans: regularized adjusted plus/minus (RAPM).
Augustus' playoff numbers provide a great starting point to understand the benefits and limitations of the various forms of plus/minus ratings as well as why RAPM is an improvement over either of the other two types that we're used to using for the WNBA.
In broad terms, what plus/minus measures is a player's impact on the score of a game: positive numbers indicate their team outscored the opponent while they were on the floor, and negative numbers indicate their team was outscored while they were on the floor.
But each one of these ratings accomplishes that in a different manner and it helps to understand each one to grasp how to best use the others.
So here's a step-by-step primer.
Raw/On-court plus/minus (OCPM)
You'll see both "raw" and "on-court" plus/minus used to describe a few different things, but in the interest of clarity here, OCPM is the form of plus/minus that you'll see in boxscores (between "FTM-A" and "OFF" at WNBA.com). It is simply a reflection of a team's point differential when a player is on the floor - Augustus' +9 below means that the Lynx were 9 points better than the Atlanta Dream while she was on the court in Game Two of the 2011 WNBA Finals.
The Minnesota Lynx' boxscore from Game Two of the 2011 WNBA Finals in which Augustus scored 36 points in spectacular fashion.
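To make the bookkeeping concrete, here is a minimal sketch of how a boxscore plus/minus like that +9 gets tallied from stint-level scoring. The lineups and point totals below are made up for illustration, not real Finals data.

```python
# Minimal sketch: raw on-court plus/minus (OCPM) from simplified
# stint data. Each stint is (players on the floor, team points, opponent points).
# All lineups and numbers here are hypothetical.

def ocpm(stints, player):
    """Sum the team's point differential over every stint the player was on the floor."""
    return sum(tm - opp for on_floor, tm, opp in stints if player in on_floor)

stints = [
    ({"Augustus", "Whalen"}, 25, 20),  # Augustus on: +5
    ({"Whalen", "Moore"},    18, 22),  # Augustus off
    ({"Augustus", "Moore"},  30, 26),  # Augustus on: +4
]

print(ocpm(stints, "Augustus"))  # 9
```

Note that the number credits (or blames) the player for everything that happened while she was out there, which is exactly the limitation discussed next.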
OCPM is a great starting point to figure out which players might have had an impact on the game that didn't show up in the standard counting stats. But it really should only be used as a starting point.
Kevin Pelton of Basketball Prospectus (and StormBasketball.com) described OCPM as "descriptive but not ascriptive" - for example, while it is undeniably true that the Lynx were nine points better than the Atlanta Dream in Game Two of the WNBA Finals when Augustus was in the game, we cannot necessarily say that Augustus was individually responsible for that advantage.
The reasons it's hard to hold a player accountable for their OCPM might be obvious. As hockey fans might be aware, good players on losing teams incur negative OCPM ratings because, well, their teams lose a lot. But there are other conceptual problems too, as described by Steve Ilardi in 2007 at 82games.com.
Unfortunately, however, the plus-minus stat doesn’t always fare particularly well in the messy real world of NBA basketball. For one thing, some players spend most of their time on the court in the company of very good teammates, while others frequently play in tandem with much weaker players. The plus-minus stat doesn’t account for these inequities at all. Likewise, some guys always find themselves matched against the opponent’s best players, while others more often face the opposing team’s second unit.
So we can conclude that OCPM provides a useful starting point for getting the full picture of who impacted a game, but it is not a good way of determining player quality - and certainly not a player's long-term impact on a team - because it's far too volatile from game to game depending on how all of those contextual factors shake out.
Net plus/minus (NPM)
Net +/- picks up where OCPM leaves off by accounting for a player's impact on the point differential both when they're on the court and when they're off it. Beyond looking at on- and off-court numbers, it does two major things differently than OCPM:
- Takes account of team and opponent performance when a player is on and off the court, to track how much a player affects what a team scores and allows.
- It uses a per minute rate - points per 40 minutes - instead of raw points to describe performance.
So the formula for net plus/minus is as follows:
(tm pts/40 on floor – opp pts/40 on floor) – (tm pts/40 off floor – opp pts/40 off floor)
The full NPM numbers for the regular season and playoffs dating back to 2006 are available on the Minnesota Lynx' net plus/minus site (that has a link on the left sidebar of this site), but let's stick with Augustus for now, for the sake of continuity, and look at her 2011 WNBA playoff numbers:
(86.7 - 81.9) - (81.2 - 46.0) = 4.8 on - 35.2 off = -30.4
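That arithmetic is simple enough to express directly. Here's a small sketch of the NPM formula using Augustus' playoff rates from above; the function name is mine, not anything official.

```python
# Net plus/minus from per-40 scoring rates:
# (tm pts/40 on - opp pts/40 on) - (tm pts/40 off - opp pts/40 off)

def net_plus_minus(tm_on40, opp_on40, tm_off40, opp_off40):
    on_court = tm_on40 - opp_on40     # team's margin per 40 with the player on
    off_court = tm_off40 - opp_off40  # team's margin per 40 with the player off
    return on_court - off_court

# Augustus' 2011 playoff rates from the example above
print(round(net_plus_minus(86.7, 81.9, 81.2, 46.0), 1))  # -30.4
```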
So what NPM does is two things:
- Measures offense and defense;
- "Tests" a player's value by considering how a team does without them (if the team consistently falls apart when a player is off the floor, that says something).
WNBA fans will probably immediately balk at the idea that the Lynx were 30 points per 40 minutes worse with Augustus on the court given what we witnessed during the playoffs (e.g. wicked crossovers, generally balling out of control, and ultimately winning the 2011 WNBA Finals MVP).
"I'm 'bout to put y'all up on somethin' else...Augustus with a -30.4 is WHOA..." (in the negative sense...wait...)
The thought that Augustus was the second-worst player on the Lynx in the playoffs would appear to defy common sense.
The Minnesota Lynx' 2011 WNBA playoff net +/- ratings, available here as a .pdf.
And back to the original point, it's a great example of the limitations of net +/-:
- First, this quite starkly illustrates the difference between OCPM (+32) and NPM (-30.4), above caveat notwithstanding.
- Second, Augustus was only off the floor for 55.6 minutes in 8 games (6.95 mpg). That the Lynx were 30.4 points per 40 better for the roughly 7 minutes per game she was off the floor doesn't say much about anything except that the Lynx did an outstanding job defensively when she sat - you'll note they allowed 35.9 fewer points per 40 with Augustus off the court despite being about 5 points worse offensively. In any event, doing well for 7 minutes a game without Augustus in a 40-minute league isn't proof of much.
- The lack of off-court minutes illustrates a second shortcoming of any plus/minus rating, as described by Dan Rosenbaum in the New York Times back in 2005: "Stable results take more than half a season; stable box-score statistics require only about 10 games." Rosenbaum was referring to more than half an NBA season, or 41+ games. Augustus played only 42 40-minute games if we combine the 2011 regular season and playoffs. So while that's not a reason to dismiss the whole thing, it's definitely a reason to remain cautious.
- A third limitation is what John Hollinger once described as "the backup effect": "If a player has a terrible backup, then his net plus-minus will look good because the team will play so poorly with the sub on the court. Similarly, players with high-quality backups won't show nearly the same disparity."
Chris Ballard once described NPM as "basketball's version of hockey's plus-minus ratio with a few esoteric twists." And by this point, you might wonder whether all those twists are even worth it (or why on earth you'd bother with some "regularized" esoterica). But stick with it - what plus/minus is good for is capturing all the intangible contributions a player brings to the floor over the course of a season. It still provides a pretty solid look at which players are making the biggest impact on their teams when they set foot on the floor, particularly for starters - for all the caveats, a highly positive plus/minus does say something about an MVP candidate, for example.
So to put NPM in perspective, the following are 2011 NPM leaders.
2011 WNBA net plus/minus leaders, available here as a .pdf.
And what makes regularized adjusted plus/minus powerful is that it minimizes the impact of all those caveats.
Regularized Adjusted Plus/Minus (RAPM)
As the label might imply, RAPM adjusts for the above problems in plus/minus, which requires math too complex to describe simply here. But to summarize, it controls for the backup, bad-team, and teammate effects mentioned above to give us a metric that actually says something about the value of a player. Eli Witus described adjusted plus/minus (APM) well at Count the Basket:
Adjusted plus/minus is a way of rating players first developed by Wayne Winston and Jeff Sagarin in the form of their WINVAL system (more here). The basic idea is simple. For each player, it starts with the team’s average point differential for each possession when they are on the court (sometimes referred to as the player’s on-court plus/minus). This gives a number showing how effective the player’s team was when they were in the game...Adjusted plus/minus uses regression analysis to control for these biases by controlling for the quality of the teammates a player played with and the opponents he played against.
So yeah, regression analysis. I'll defer to Witus' detailed explanation of that math (click here), and Daniel M. has a great summary of all the different kinds of plus/minus adjustments and the specific regression used for RAPM.
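Concretely, the regression Witus describes starts by encoding each stint as a row of +1s, -1s, and 0s over all players, with that stint's scoring margin as the target. A minimal sketch of that encoding step, with hypothetical player names and lineups:

```python
# Sketch: building one row of the APM design matrix. Each row gets
# +1 for players on the floor for one side, -1 for the other side's
# players, and 0 for everyone who sat. Names here are hypothetical.

def stint_row(players, home_lineup, away_lineup):
    return [1 if p in home_lineup else -1 if p in away_lineup else 0
            for p in players]

players = ["A", "B", "C", "D"]
row = stint_row(players, home_lineup={"A", "B"}, away_lineup={"C", "D"})
print(row)  # [1, 1, -1, -1]
```

With thousands of such rows, least squares then assigns each player the rating that best explains all the stint margins at once, which is how the quality of teammates and opponents gets controlled for.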
But we still haven't regularized that ish yet.
Sorry, I had to.
APM still presents three problems summarized below (and outlined in more depth by Daniel M. here):
- There's still that problem cited by Rosenbaum above about number of games or "sample size".
- Substitution effects or "collinearity": if two players come in together all the time, it tells us about the pair, not the individual.
- Similarly, if two players are always substituted for each other, it tells us about the players relative to each other, not the team.
So "regularizing" - unless you'll allow me to sing that we have to "regulate" these numbers - tries to reduce the latter two of those effects. Yet the sample size problem remains. To further adjust for that, Jeremias Engelmann has looked at NBA RAPM over a longer span of 10 seasons. Although players and teams obviously change over that longer time span, that's actually a good thing - it becomes a far more powerful measure of how good a player is relative to her peers over the long term in a variety of contexts and situations, minimizing all three of the bulleted problems above.
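For the curious, here is a toy sketch of what the "regularized" part actually does: ridge regression, which shrinks every player's rating toward the league mean (zero here) via a penalty term lambda. The stint data and lambda below are made up for illustration; real RAPM uses thousands of stints and a cross-validated lambda.

```python
# Toy regularized adjusted plus/minus: solve (X'X + lam*I) beta = X'y.
# All data here is hypothetical; this is a sketch, not real RAPM.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rapm(X, y, lam):
    """Ridge regression: the lam*I term shrinks all ratings toward zero."""
    n = len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    return solve(XtX, Xty)

# Columns = players; +1 if on the floor for the team, -1 for the opponent.
X = [
    [1, 1, -1],  # players 0 and 1 vs. player 2
    [1, 0, -1],
    [0, 1, -1],
]
y = [6.0, 2.0, 3.0]  # point margin in each stint (toy numbers)

ratings = rapm(X, y, lam=1.0)
print([round(r, 2) for r in ratings])  # ≈ [1.25, 1.75, -1.25]
```

The larger lambda is, the harder a low-minute player gets pulled toward average - which is exactly the rarely-used-players issue Daniel M. raises later in this post.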
And this weekend he contacted me about doing the same for the WNBA.
You can find the four-year rating for every player from 2008-2011 here, but what follows is a look at the top 10 players.
(Remember, plus/minus - adjusted or otherwise - is telling you a player's impact in terms of how the team fared while she was on the court compared to when she was off of it. To quote Daniel M., "If the team does better with the player on the court than off, the player is probably pretty good. If the team does better with the player on the bench, the player is probably not as good." My emphasis on the probably.)
(Table columns: Name | Offense per 100 | Defense per 100 | Off+Def per 200)
Four-year WNBA RAPM leaders (2008-2011).
* = Best in WNBA
As a means of evaluating to what extent this metric passes the "smell test", I'm going to compare it to some numbers that we might be more familiar with: PER, which James Bowman has already outlined previously.
No boxscore-based metric correlates with RAPM well, but PER is arguably the closest in the NBA and Basketball-Reference.com provides those numbers for every season in the WNBA. This is not at all saying that either is "correct"; they are two very different metrics. But it does give us a common frame of reference to think through things:
(Table columns: Name | 2008 PER | 2009 PER | 2010 PER | 2011 PER | 4-year avg PER)
Four-year PER for top ten WNBA RAPM players from 2008-2011 via Basketball-Reference.com.
* = Player missed significant time due to injury or absence.
** = Average does not include 2009 season.
Obviously, there are some players on this top 10 list that WNBA fans will find puzzling, just using PER as a proxy for "conventional wisdom":
- Nakia Sanford: Sanford, as noted above, has been the top defender in the league over the last four years, according to RAPM. Phoenix Mercury fans (and Seattle Storm fans, for the opposite reason) might nod in agreement with that after the 2011 playoffs. But more important to note is that the next four best defenders by RAPM are also on this list - Jones, Catchings, Lyttle, Parker, in order - and Sanford is way ahead of the pack.
- Lindsey Harding: Like Sanford, Harding has not even been an above-average player over the last four years according to PER. But what's interesting about Harding as a point guard - in a league in which Sue Bird and Lindsay Whalen exist - is that she is 7th in the league offensively, the top point guard, sandwiched between Diana Taurasi and former teammate Seimone Augustus. But this is where that bolded and italicized "probably" from above comes in - it's almost undeniable that Harding had a huge impact offensively on both the Dream and Washington Mystics in the past three years. More than all but six players in the WNBA? There's room for debate there, but it's not entirely a fluke either.
- Taj McWilliams-Franklin: Again, "Mama Taj" will probably get different responses from different fan bases. But it's not uncommon for people to suggest that she was the key component of a very talented Lynx team this season, and RAPM bears out a similar story over the past four years.
There are still a few problems with RAPM, described as follows by Daniel M:
...one major issue is that because all players are regressed toward league mean, players with few minutes are considered average, and to rate out really badly, a player must have had a bunch of minutes to verify that the player was indeed that bad...the regression of rarely-used players to league mean is a big one, the lack of aging adjustment for multi-year ratings another, and not weighting multi-year ratings towards recency for best predictive power a third.
So coming full circle, what about Augustus?
I looked at the top 10 above, but Augustus checks in at 12th in the league over the last four years, just behind Maya Moore (who has obviously only played one year) and followed by Diana Taurasi, Lauren Jackson (injuries), Chelsea Newton (limited minutes) and Deanna Nolan (left the league after 2009). Given all the caveats needed for just the top 15, it was simpler to leave those for further discussion later.
Ultimately, the lesson is that this is just one metric that might be moving closer to the statistical "Holy Grail" but isn't quite there yet.
In fact, the existence of RAPM doesn't even preclude the use of the "lesser" forms - OCPM is great for individual games, NPM is fine during the course of a season due to its relative simplicity, and RAPM is great over the long-term. The key is just understanding what each one does and does not tell you.
The way I like to think of plus/minus is as a complement to productivity-oriented numbers, to address that probably factor: if a player ranks high in both PER - or your favorite metric - and RAPM, they're almost certainly pretty good; if a player ranks low in both, they're almost certainly not destined for the All-Star Game. RAPM just gives you a somewhat more powerful tool to use.
So as for sorting out all the players in between? That's why we continue to watch the games.
- What Does Plus/Minus Mean In Basketball? - LIVESTRONG.com
- Plus/Minus in the N.B.A., a Plus or Minus? - NYTimes.com
- Adjusted Plus-Minus: An Idea Whose Time Has Come - 82games.com
- Keeping Score: A Statistical Holy Grail: The Search for the Winner Within - The New York Times
- Those stat guys are at it again, and now the Moneyball - 10.24.05 - SI Vault
- WNBA Player Efficiency Ratings (PER) for 2011 - Swish Appeal