I explained this in an email to someone earlier this year, so I'll just throw it out here: I think team statistics are far more powerful than individual statistics for a number of reasons, but chief among them is that at the unit level it's much easier to approximate the strengths and weaknesses of a team's interactions, whereas individual statistics isolate one player's performance and leave a great deal out.
Individual statistics can give you a rough hierarchy of players (though they shortchange defensive prowess); unit-level statistics can not only provide a pretty good picture of the league's hierarchy, but also show, in one snapshot, what specifically makes a team good or bad.
In the past, I've described two ways of evaluating the quality of teams statistically:
- Model-Estimated Value (MEV), by David Sparks
- Team Factors Rating (based on a weighted Four Factors formula)
Team Factors Rating
The metric I've referred to as "team factors" is a weighted formula of Dean Oliver's Four Factors. As a single metric, it gives you a tempo-free assessment of how well a team is playing relative to their opponents in terms of the key elements of winning basketball: shooting efficiency, turnover percentage, offensive rebounding, and free throw rate.
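A rating like this could be sketched as follows. The post doesn't give its exact weights, so the 40/25/20/15 values below are Dean Oliver's commonly cited Four Factors weights, used here as placeholders; the function names and input layout are my own assumptions, not the author's implementation:

```python
# Hedged sketch of a weighted Four Factors differential rating.
# Weights (0.40 shooting, 0.25 turnovers, 0.20 rebounding, 0.15 free throws)
# are Dean Oliver's commonly cited values, used as placeholders.

def four_factors(fgm, fg3m, fga, tov, orb, opp_drb, fta, ftm):
    """Compute the four factors from box-score totals."""
    efg = (fgm + 0.5 * fg3m) / fga            # effective FG%
    tov_pct = tov / (fga + 0.44 * fta + tov)  # turnovers per play
    orb_pct = orb / (orb + opp_drb)           # offensive rebounding %
    ft_rate = ftm / fga                       # free throw rate
    return efg, tov_pct, orb_pct, ft_rate

def team_factors_rating(team, opp):
    """Weighted difference between a team's factors and its opponent's."""
    weights = (0.40, 0.25, 0.20, 0.15)
    t = four_factors(**team)
    o = four_factors(**opp)
    # Turnovers hurt, so that differential is flipped (opponent minus team).
    diffs = (t[0] - o[0], o[1] - t[1], t[2] - o[2], t[3] - o[3])
    return sum(w * d for w, d in zip(weights, diffs))
```

A positive rating means the team is beating its opponents on the weighted factors; the measure is tempo-free because every factor is a rate, not a count.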
Model-Estimated Value is also interesting to use at the team level, primarily because it makes it much easier to figure out how the sum total of individual production relates to team-level production (which is very useful for determining things like MVP awards using the "percent of valuable contributions" a player has made to their team).
There are two things to be aware of with MEV at the unit level, though. First, it's not tempo-free, meaning a fast-paced team will have a higher MEV. For example, Phoenix has had the highest MEV every year I've kept track, because playing faster gives them more opportunities to produce positive value. Second, it's not at all explanatory, and it really wasn't intended to be - it reflects the reality that the quality of a team's overall statistical performance (e.g. assists, steals, etc.) might not be reflected in the final score.
But what it's very useful for is figuring out how productive a team is relative to its opponents, given the pace they're playing at. So in situations where "the game was closer than the final score suggests," or a team visibly outplays an opponent but never gains separation on the scoreboard, the difference between a team's MEV and its opponent's is a pretty interesting and useful indicator, particularly if you want some way of knowing how well a team is playing even when performance isn't resulting in wins.
Or in perhaps more familiar terms, "...even when activity (MEV) isn't translating into achievement of things that do win games (team factors)."
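One way to address the pace caveat above is to normalize the team-vs-opponent MEV margin by possessions. A minimal sketch, assuming MEV totals have already been computed elsewhere (MEV's actual box-score weights are David Sparks' and aren't reproduced here); the function names are hypothetical:

```python
# Hedged sketch: turn a raw MEV margin into a tempo-free one by
# expressing it per 100 possessions, so fast-paced teams like Phoenix
# aren't overrated simply for generating more events.

def possessions(fga, orb, tov, fta):
    """Standard box-score possession estimate."""
    return fga - orb + tov + 0.44 * fta

def mev_margin_per_100(team_mev, opp_mev, poss):
    """Tempo-free MEV differential: margin per 100 possessions."""
    return 100.0 * (team_mev - opp_mev) / poss
```

The same +10 raw MEV margin counts for more when it comes in fewer possessions, which is exactly the adjustment the raw team totals lack.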
Estimating activity and achievement
I'm not arguing that any of this is more predictive than offensive/defensive rating or point differential. Rather, these metrics provide a lens for probing a team's dynamics more deeply at the unit level, in a way that connects directly to the individual level.
And for power rankings that's particularly useful, because it's not only about whether a team is winning but how well it's playing, and sometimes whether a fatal flaw is causing losses. More importantly, given the current circumstances, it makes it easy to see the specific trends currently influencing performance, which adds context to how we look at season-level numbers.
Click here for a longer description of this way of looking at things.