Ranking the Champions and Finals


October is follow-up month. That means we're going back to some of our previous number crunching exercises and updating them for what's happened since.

Today we're looking at a couple of posts from 2011. One involved ranking every WNBA Finals by competitiveness and overall appeal. The other ranked the WNBA championship teams.

First, let's look at the finals. Go back to the 2011 post if you need a refresher on the scoring system.



The 2012 finals did not put up a great score: few close games and only one road win make for mediocre numbers. The conference finals were a mixed bag, scoring 7 (West) and -7 (East). The first-round Minnesota-Seattle series scored a 14, making it the best series of the year. Expanding the finals to five games may be dragging the scores down a bit. It's hard to argue that the 2012 finals were really less compelling than the 2002 mismatch ranked immediately above them.
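The actual 2011 scoring system lives in the original post, but the cues above (close games and road wins help, blowouts hurt) suggest its general shape. Here's a purely hypothetical sketch of that kind of series score; the point values and thresholds are invented, not the author's real formula.

```python
def series_score(games):
    """Toy series score. games: list of (margin, road_win) tuples, one per game.

    Hypothetical rules, loosely matching the cues in the text:
    close games (margin <= 5) add points, blowouts (margin >= 15)
    subtract points, and each road win adds a bonus. The real 2011
    system almost certainly uses different values.
    """
    score = 0
    for margin, road_win in games:
        if margin <= 5:       # close game: rewards competitiveness
            score += 5
        elif margin >= 15:    # blowout: penalizes a mismatch
            score -= 5
        if road_win:          # road win: rewards drama
            score += 3
    return score

# A one-sided sweep with a single close road win...
print(series_score([(20, False), (16, False), (4, True)]))
# ...versus a series where every game was close:
print(series_score([(3, True), (2, False), (5, True), (1, False)]))
```

Under rules like these, a series full of close games and road wins piles up points, while a blowout-heavy sweep can easily land in negative territory, which is consistent with how the 2012 and 2013 finals graded out.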

The 2013 finals rate as the worst ever. Not surprising, as the powerhouse Lynx again crushed the shorthanded Dream. The conference finals were just as bad, scoring at -13 (East) and -14 (West). The only playoff series that scored a positive number that year was the first rounder between Los Angeles and Phoenix, which rated an 8.

The 2014 finals scored surprisingly well, considering how noncompetitive it seemed much of the time. It's still close to the bottom, but not as close as you might expect. We did get one close game, and Phoenix's great record buoyed the rating. The conference finals came in at 8 (West) and -3 (East), which is pretty good. The highest rated series of the year was the Atlanta-Chicago first rounder, which scored a 10.

As for ranking the champions, you'll recall that we had two methods of doing so. First let's look at the Pelton method...



The "grade inflation" from the 16-team league days will be all but impossible to overcome. The 2012 Fever and 2014 Mercury both do well here; the 2013 Lynx, not so much. Part of it is the system, which tends to reward beating teams with good stats over actually being a team with good stats. Put another way, playoff opponents' scoring differentials vary far more than champions' own scoring differentials do, and a team has far less control over who it draws. As a result, the quality of the playoff opponents ends up determining the ranking more than the quality of the team itself, which makes this a deeply flawed method.
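The variance argument can be illustrated with a toy simulation. The numbers below are invented (not actual WNBA differentials), and the rating formula is a stand-in, not the real Pelton method: it just adds a champion's own differential to its average opponent differential. The point is that when one input varies much more than the other, it dominates the final ranking.

```python
import random
import statistics

random.seed(0)

# Champions are all strong, so their own differentials cluster tightly;
# playoff opponents range from mediocre to elite, so theirs spread widely.
# (Means and spreads here are made up for illustration.)
champ_diff = [random.gauss(6.0, 1.5) for _ in range(100)]
opp_diff = [random.gauss(2.0, 4.0) for _ in range(100)]

# Stand-in rating: own quality plus average opponent quality.
ratings = [c + o for c, o in zip(champ_diff, opp_diff)]

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

print("spread of champion differentials:", round(statistics.stdev(champ_diff), 2))
print("spread of opponent differentials:", round(statistics.stdev(opp_diff), 2))
print("rating vs. own quality:     ", round(corr(ratings, champ_diff), 2))
print("rating vs. opponent quality:", round(corr(ratings, opp_diff), 2))
```

The rating tracks opponent quality far more closely than the champion's own quality, which is exactly the flaw described above: the ranking mostly measures who you happened to play.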

What about the Hollinger method?



We have a new leader!

The 2014 Mercury at the top and the 2012 Fever in the low middle seems much more reasonable than having them 5th and 6th. I can't argue much with the Merc being the best ever. It would be fascinating to see how Coop, Swoopes, and the rest of the 1998-2000 Comets would attack Griner.

Next for follow-up month, we'll look at weighted average ages.