Ranking the Lynx, Part II

pilight's great post on ranking the Minnesota Lynx definitely caught my eye.  As someone who never got a chance to see the great teams of the Houston Comets or the Los Angeles Sparks, I've always been interested in which teams in the WNBA were the best ever.

My favorite method - every stathead has a favorite metric - is the Noll-Scully Measure.  I'm sure I must have written about Noll-Scully Measures a thousand times, but they are one method to measure the competitiveness of a league in any given year.  (Oh yeah.  Here it is.)   The ideas behind the Noll-Scully measures - simplified - are as follows:

1.  To figure out how competitive a league is, imagine a perfectly balanced league - a league where every team is exactly the same in terms of talent.  You can think of it as one team, cloned several times.  Random chance dictates that these teams will not all finish at .500, for the same reason that you don't get exactly twenty-five heads and twenty-five tails every single time you flip a coin fifty times.  Sometimes you get twenty-eight heads, sometimes twenty-four, once in a while you get thirty, and sometimes you get exactly twenty-five.  It's very rare to get forty heads.  Likewise for teams and wins.

2.  The coin flip distribution is what's called a binomial distribution.  It has a very predictable distribution or "scatter".

3.  We then compare the scatter of the league in question to the scatter of wins in our "perfectly competitive league".  We use a statistical concept called "standard deviation" to measure scatter.  (This function is STDEV in an Excel spreadsheet.)
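The coin-flip idea behind the "perfectly competitive league" can be sanity-checked with a quick simulation.  This is just an illustrative sketch - the twelve-team, 34-game league here is hypothetical - but it shows the scatter of wins in a balanced league settling near the binomial value of (square root of 34)/2, or about 2.92:

```python
import random
import statistics

def simulate_balanced_season(num_teams, games, rng):
    """One season where every game is a fair coin flip for each team."""
    return [sum(rng.random() < 0.5 for _ in range(games))
            for _ in range(num_teams)]

rng = random.Random(42)
games = 34

# Average the scatter (standard deviation of wins) over many seasons.
sds = [statistics.stdev(simulate_balanced_season(12, games, rng))
       for _ in range(2000)]
avg_scatter = statistics.mean(sds)

print(round(avg_scatter, 2))       # lands near the binomial prediction
print(round(games ** 0.5 / 2, 2))  # 2.92, the binomial prediction
```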

Noll-Scully Measure = (standard deviation of wins in test league)/(standard deviation of wins in perfectly competitive league)

Wins in the perfectly competitive league follow a binomial distribution with p = .5, so their standard deviation is (square root of games in a season)/2.  Substituting that in:

= 2 x (standard deviation of wins in test league)/(square root of number of games for a team in regular season)
= 2 x (standard deviation of wins for a given WNBA season)/(the square root of 34)

This works for any league - WNBA, NBA (change the denominator to the square root of 82), the NFL (change the denominator to the square root of 16), etc.
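In code, the calculation is only a couple of lines.  This is a sketch in Python rather than Excel, and the win totals below are a made-up 12-team, 34-game season, not real standings:

```python
import statistics

def noll_scully(wins, games_per_team):
    """2 x (std dev of wins) / (square root of games per team)."""
    return 2 * statistics.stdev(wins) / games_per_team ** 0.5

# Hypothetical 34-game season (the wins sum to 204, as twelve
# 34-game schedules must).
wins = [27, 24, 21, 21, 20, 19, 18, 16, 13, 11, 8, 6]
print(round(noll_scully(wins, 34), 2))  # 2.19
```

Note that `statistics.stdev` is the sample standard deviation, matching Excel's STDEV.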

So how do we evaluate the results of this formula?  Assume the league was perfectly competitive. Then the "test league" is the "perfectly competitive league", the numerator and the denominator of the equation are equal and the result is equal to "one".  We'll use two decimals and write it as 1.00.  The rule then becomes, "the closer your Noll-Scully measure is to 1.00, the more competitive the league."

I used data from 1996 to 2010 and calculated Noll-Scully measures with some caveats.  For typical sports leagues:

NFL:  1.58
NHL:  1.69
MLB:  1.84
NBA:  2.77

Where does the WNBA fit in?  Using a weighted mean of the last 15 seasons, we get

WNBA: 1.81

The WNBA is more competitive by a large measure than the NBA, which might be the least competitive major sports league.  Generally, the NBA is split between very good teams and very bad ones.  There are many factors which could affect the Noll-Scully measure: the role of luck during individual games, how strongly the gain or loss of a single player affects a team, and how long players remain in the league.

Each season has its own Noll-Scully measure.  Here are the 15 years of Noll-Scully measures for the WNBA:


1997 1.40
1998 2.60
1999 1.62
2000 2.22
2001 1.96
2002 1.64
2003 1.65
2004 1.26
2005 2.00
2006 2.12
2007 1.53
2008 1.84
2009 1.13
2010 1.98
2011 2.30

Note the value associated with the 2011 season:  2.30.  This season - according to Noll-Scully - has been the second-least competitive in WNBA history, behind only the 1998 season where the Houston Comets galloped to the WNBA championship on the strength of a 27-3 record.

Why is that?  Well, remember Noll-Scully has to do with how much "scatter" there is in the league's wins.  The more scatter, the higher the measure.  The 2011 season had scatter on both ends.  True, the Minnesota Lynx's 27-7 record looks very impressive - but you have to remember that there were very bad teams on the other side of the scale.  Tulsa had one of the worst records in WNBA history at 3-31.  Washington finished at 6-28.  Of the ten teams not named Tulsa or Washington, five finished with 20 or more wins.  When Candace Parker got hurt, the race for the playoffs was basically over in the West and the only question was the order in which the four sure-to-qualify Western Conference teams would finish.  (The Eastern Conference had slightly more drama.)  There wasn't much drama in the 2011 season until the playoffs came.

Okay, so what does all of the above mean for individual teams?  Generally, a great team should be not only "above average" but it should be "far away from average".   (The same holds true for horrible teams, just in the opposite direction.)  Average is defined as having a .500 winning percentage.

Instead of taking the standard deviation of wins, we'll take the standard deviation of winning percentage of all teams.  Therefore, a team's strength against its fellow teams could be ranked:

Team strength = (winning percentage - .500)/(standard deviation of winning percentage for given season)

In short, a team's winning percentage is compared - in some way - to the scatter of winning percentage of the league.  (This defines what "far away" means in "far away from average.")
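As a sketch of the arithmetic (again in Python, with invented records rather than a real season's standings):

```python
import statistics

def team_strength(win_pct, league_pcts):
    """(winning pct - .500) / (std dev of winning pct for the season)."""
    return (win_pct - 0.500) / statistics.stdev(league_pcts)

# Hypothetical 12-team, 34-game season, as winning percentages.
league_pcts = [w / 34 for w in [27, 24, 21, 21, 20, 19, 18, 16, 13, 11, 8, 6]]

print(round(team_strength(max(league_pcts), league_pcts), 2))  # 1.56
print(round(team_strength(min(league_pcts), league_pcts), 2))  # -1.72
```

A great team pushes this number well above zero; a horrible team pushes it well below.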

We can perform this calculation for all WNBA teams in all seasons of the league.  Let's take a look at the lowest values - the worst teams in league history.



1 2008 Atlanta Dream -2.424
2 2004 San Antonio Silver Stars -2.177
3 2011 Tulsa Shock -2.085
4 1999 Cleveland Rockers -1.96
5 2006 Chicago Sky -1.939
6 2010 Tulsa Shock -1.907
7 1997 Utah Starzz -1.89
8 2005 Charlotte Sting -1.886
9 2003 Phoenix Mercury -1.867
10 2005 San Antonio Silver Stars -1.715
11 2000 Seattle Storm -1.695
12 1998 Washington Mystics -1.682
13 2003 Washington Mystics -1.66
14 2011 Washington Mystics -1.638
15 2007 Minnesota Lynx -1.569
16 2007 Los Angeles Sparks -1.569
17 2009 Sacramento Monarchs -1.519
18 2002 Detroit Shock -1.511

Egad.  Those are some stinkers.  The last two Tulsa Shock teams are among the ten worst WNBA teams in history by this measure.  But which are the best teams in WNBA history?



1 2001 Los Angeles Sparks 2.185
2 1999 Houston Comets 2.1775
3 2004 Los Angeles Sparks 2.1773
4 2000 Los Angeles Sparks 2.034
5 2002 Los Angeles Sparks 1.942
6 2010 Seattle Storm 1.907
7 2000 Houston Comets 1.864
8 2009 Phoenix Mercury 1.823
9 2002 Houston Comets 1.727
10 1998 Houston Comets 1.682
11 2003 Detroit Shock 1.66
12 2007 Detroit Shock 1.569
13 2005 Connecticut Sun 1.543
14 2009 Indiana Fever 1.519

According to the above, the 2001-04 Los Angeles Sparks could have given the 1997-2000 Houston Comets a run for their money in greatness. 

But where is the 2011 Minnesota Lynx?  They would rank #15 on this "Best Teams" list.  The team below them?  The 2006 Connecticut Sun, which finished one game short of the 2011 Lynx at 26-8.  So how come that team - which incidentally also had Taj McWilliams-Franklin and Lindsay Whalen - isn't being touted as one of the greatest teams of all time?  Probably because it didn't achieve playoff greatness.  This was the last gasp of a team that had lost in the WNBA Finals two previous years and got beaten by the Detroit Shock in the Eastern Conference Finals in 2006.

Think about it.  What if the opponent of the 2011 Lynx were not the 2011 Atlanta Dream - but the 2009 Indiana Fever, which took Phoenix to a full five games in the WNBA Finals?  And if you think that the 2009 Fever would have given the 2011 Lynx a much harder time, then what about the 2009 Mercury, which had not only Diana Taurasi but Cappie Pondexter, each of whom could explode like an Angel McCoughtry can?

What about the 2001 Los Angeles Sparks?  They finished with a better record by percentage than the 2011 Lynx at 28-4.  They won their conference by eight games, whereas the Lynx only won theirs by six.  They had Lisa Leslie averaging 19.5 points and 9.6 rebounds a game, four players averaging in double-figures and the ever-nasty Tot Byears to come off the bench.  Are we really going to claim that the 2011 Lynx could have beaten LLL (the league MVP that year), D-Nasty, Tamecka, Mwadi and Tot?  Sure, the Lynx had Cheryl Reeve as coach but the 2001 Sparks had Michael Cooper, who won more than one ring with this club.

So is a measure based on the standard deviation of winning percentage the way to measure greatness?  Is all of the above the be-all and end-all argument?  No, it is not.  You don't want to use statistics the same way that a drunk uses a lamppost - "for support, rather than for illumination".  Stats should shine light on dark places, even if imperfectly, but should never be the sole support of an argument.

For example, this measure counts only regular season wins toward greatness, and doesn't take playoff greatness into account at all!  The #4 team - the 2000 Los Angeles Sparks - didn't win a title, and the 2000 title winner - Houston - is actually ranked below them in greatness, at #7!
Furthermore, the above numbers cast a positive light on some overlooked teams.  Mechelle Voepel and Michelle Smith rank the 2000 Comets as 10th out of the 14 championship teams.  Readers - including pilight - felt that the 2000 Comets were not ranked high enough.  The Kevin Pelton and John Hollinger ranking methods used by pilight give the last of the great Comets teams their due.

So does any of the above answer the question?  No.  But it might start an argument.  And in some WNBA circles, a good argument is just as good as a decisively-answered question.