For my next Sunday article (once again late), I thought I’d go over a statistic for rating players that I find frustrating, mostly because of how its authors arrogantly discuss it as if it were bullet-proof.

In the advanced statistical realm of the NBA there are two main flavors of analysis: a microscopic view using every box score number available to estimate the value of each player, and a macroscopic one that uses the result (i.e. the score of the game) to back-calculate the value of each player or line-up. The mainstream media incorrectly assumes “statheads” only care about what’s in the box score and thus miss everything that doesn’t show up as a stat, like setting a pick, fouling at just the right time to stop a lay-up, or most of the defensive end of the court, like being able to guard Dwight Howard one-on-one and deny him deep post position. This is completely off base, although some people, like David Berri of Wins Produced fame, are guilty of this offense.

Macroscopic models, mainly adjusted +/- types that use the score of the game and combinations of players to attach a point value to each player, are unfortunately limited in their precision. Even with an 82-game schedule and 48 minutes a game, there is not enough information to shrink the margin of error below roughly five points. For reference, those five points are per game, and a five-point swing would separate the defending champion Dallas Mavericks from teams that couldn’t even make the playoffs. Microscopic models, however, have no problem with precision: they can tell the reader exactly what the result is virtually every time. But they are shortsighted, obviously, by reading only the box score; their knowledge of defense is stealing the ball, blocking it, fouling, or rebounding. What they lack is accuracy. The macroscopic model can show you the entire galaxy, but blurred and hazy; the microscopic one focuses on a few spiral arms with great detail and clarity. You can’t assume you know exactly what the rest of the galaxy looks like from only a small section.

Wins Produced, an invention of economics professor David Berri, is the kind that focuses on only a few points in space: it uses traditional box score stats to estimate how many wins each player contributes to his team. Unfortunately, the proponents of the stat are so arrogant in the discussion and validation of their findings that they bring down the entire advanced statistical basketball community with them. Others, like John Hollinger of PER fame, will discuss the limitations of their models and admit their flaws. Hollinger doesn’t rate players solely by his stat, and he acknowledges that it ignores defense almost entirely. Wins Produced is discussed as if it’s the only statistic a person needs when evaluating the league, save for something like age.

Despite the confidence of the authors, Wins Produced isn’t a super advanced metric; it’s really quite simple. The basis of the entire model is the possession. If offensive and defensive efficiency directly relate to wins, a reasonable assumption and one tested statistically, then one should be able to find a proxy for wins based on a model that calculates those efficiencies. Rebounds add possessions, and thus add wins, while turnovers are lost possessions, and thus lost wins. This is entirely fine on a team level, but huge assumptions are glossed over when pinning those values on individual players.
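
To make the possession logic concrete, here’s a minimal sketch of that kind of model. The linear form and the coefficients below are hypothetical stand-ins I made up for illustration, not Berri’s actual regression values:

```python
# Sketch of a possession-efficiency win model. The intercept (0.5)
# and slope (3.0) are invented for illustration only.

def efficiency(points, possessions):
    """Points per possession."""
    return points / possessions

def expected_win_pct(off_eff, def_eff, intercept=0.5, slope=3.0):
    # Win percentage rises with the gap between offensive and
    # defensive efficiency (points scored vs. allowed per possession).
    return intercept + slope * (off_eff - def_eff)

# A team scoring 1.08 and allowing 1.02 points per possession
# projects to a winning percentage around .680, roughly 56 wins.
win_pct = expected_win_pct(1.08, 1.02)
wins = 82 * win_pct
```

Any stat that gains a possession (a rebound) nudges the efficiency gap up, and any stat that loses one (a turnover) nudges it down, which is the whole engine of the model.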

Although most stats are tied directly to wins through possessions, some, like blocks and assists, had their values found through analysis. However, the same average value is applied to every player; this means a block is worth the same for someone like Dwight Howard, who swats a shot out of bounds and lets the other team keep the possession, as for Tim Duncan, who tries to tip the ball to a teammate to gain one. Assists have the same problem – an Andre Miller assist is typically more valuable than a LeBron James one because of how often Miller feeds a teammate for a dunk and how often James gets an assist on a long jumper. And there are even more problems with the statistics directly tied to a possession.

Take, for example, the rebound. Think of the retired Bruce Bowen, a respected defender, guarding an offensively potent wing player during a possession. If he does everything correctly – denying good position after running up the court, moving his feet to prevent a drive to the lane, staying grounded during shot fakes, forcing his man into a tough long jumper – he still receives no credit according to Wins Produced; instead, someone like Rasho Nesterovic grabs the rebound and is rewarded with an uptick in his statistics. There is obviously something very wrong with that model.

For defense, there is another factor in Wins Produced that I have not yet discussed: the team defense factor. To incorporate the difficult task of assessing defense beyond a surface level, the model simply adds an almost negligible number based solely on the player’s team; it’s essentially a team defensive efficiency statistic. That’s it. Carlos Boozer got the same benefit as Luol Deng, and Jason Richardson the same as Dwight Howard. There are also positional adjustments to mask the poor numbers of guards relative to centers and power forwards, but that’s not egregious compared to the next topic.

For validation of their model, the authors behind the formula regularly present a table showing Wins Produced for each team alongside its actual results during the season. The differences are typically one to four wins. However, using those team results to validate the model is very problematic. With how Wins Produced was built, they are essentially summing individual offensive and defensive efficiency statistics and comparing them to team wins, which are highly correlated with efficiency; but statistics like rebounds are tied directly to the team, not the player. This does not mean they found the magical formula to explain player value; all they did was say wins are explained by efficiency, which was then divided among the players based on simple box score statistics. To put it simply, they defined a word using the same word in a slightly different form – like defining “assuming” as “when one makes assumptions.” That is not a useful definition.
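
The circularity can be shown in a few lines. Using a hypothetical apportioning rule (the player shares below are made up), any player values carved out of a team’s efficiency-based win total will necessarily sum back to that total:

```python
# Apportion a team's efficiency-derived win total to players by
# hypothetical box-score "production" shares, then sum it back up.
team_wins_from_efficiency = 57.0

shares = {"A": 0.30, "B": 0.25, "C": 0.20, "D": 0.15, "E": 0.10}

player_wins = {p: s * team_wins_from_efficiency for p, s in shares.items()}

# The player values recover the team total by construction, so the
# close match between summed player wins and actual team wins says
# nothing about whether credit was divided correctly among players.
total = sum(player_wins.values())
```

No matter how crudely the shares are assigned, `total` comes back as 57, which is exactly why the authors’ validation table proves less than it appears to.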

Another large assumption David Berri and company make in defending Wins Produced is that, since performance measured in box score stats or Wins Produced doesn’t change much over time and teammates have little effect on each other’s stats, box score stats comprehensively reflect a player’s worth. The logic behind that argument is highly flawed. They essentially dodged the question and instead discussed how player stats don’t change much with time. The consistency of box score statistics has nothing to do with their explanatory power. If players are consistent in the NBA, then it’s reasonable to assume box score stats won’t change much, but it’s equally reasonable to assume the stuff not found in the box score won’t change much either. I don’t know how that argument actually addresses the limitations of the box score, but it’s a real argument they use on their website.

As an illustration of the limitations of Wins Produced, I created the chart below showing players’ Wins Produced versus a regularized adjusted +/- score from stats-for-the-nba.com for the 2010-2011 season. The red line is the fit from a linear regression, which shows the correlation between the two stats. Wins Produced explains only 19% of the variation in adjusted +/-, which means they greatly disagree on the value of players. This could partly be explained by the inherent limitations of +/-, which can’t give precise results; but I also think it shows they disagree fundamentally on most players in the league to some significant degree. If Wins Produced is a truly faultless method, as the authors pretend in their articles (never mentioning that *gasp* the formula could have flaws), then it should correlate more strongly with something as comprehensive as +/-.
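
For anyone who wants to reproduce this kind of comparison, the R² of a one-variable linear regression is just the squared Pearson correlation between the two rating lists. A sketch, using invented sample numbers rather than my actual data set:

```python
# R^2 for a simple linear regression of one player rating on another
# equals the squared Pearson correlation between the two lists.

def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Invented sample values; the real chart paired each player's Wins
# Produced with his regularized adjusted +/- from 2010-2011.
wins_produced = [12.1, 3.4, 8.0, -1.2, 5.5, 10.3, 0.8, 6.6]
adjusted_pm = [4.0, -2.1, 1.5, 0.3, -1.0, 5.2, -3.3, 0.9]
r2 = r_squared(wins_produced, adjusted_pm)
```

An R² near 1 would mean the two metrics rank players almost identically; the 0.19 I found means they mostly do not.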

I also included some interesting outliers. Dirk Nowitzki, strangely enough, was the greatest outlier by standardized residual. I suspect Wins Produced didn’t like his low rebounding and shot-blocking numbers, but Dallas didn’t seem to mind when they won the championship. Nick Collison is an intriguing outlier: either he’s an extremely underrated NBA player on par with superstars like Chris Paul or Wade, or +/- has a problem determining his true value. Above the red regression line you mainly find players known for fundamental defense, like Jason Collins and Andrew Bogut, or offensive synergy, like Nash and Ginobili; below it are guys known for putting up stats while forgoing man-to-man defense or other aspects of the game not tabulated in a simple stat, like Kevin Love and Kevin Martin.

The main issue with Wins Produced is how the authors and supporters discuss the model like it’s flawless. It’s an interesting method that can compete with Win Shares or PER, but to claim that it can explain wins with 95% accuracy is ridiculous. All they’re doing is dressing up a team’s efficiency numbers, allotting a piece to each player, and then summing those results to show how closely they correlate with wins. Well, of course they do; everyone knows a team’s points scored and allowed per possession can explain wins. If you have the audacity to ask them how defense can be explained by steals, blocks, and rebounds, you’ll get a response about how your little brain can’t comprehend a counterintuitive result. My advice is to view Wins Produced as an interesting summation of box score stats, not something that should sway your complete view of a player. Using Wins Produced as your only method of evaluating a player is like zooming in on one feature of an animal. Maybe that one part can lead you to a conclusion about the rest of the animal, but you could just as easily become the blind man holding the elephant’s trunk, believing it’s a snake instead of something much more powerful.

Your issue seems to be that evaluation based on boxscores is flawed, but you don’t really explain how.

> but to claim that it can explain wins with a 95% accuracy is ridiculous

This seems like one big strawman. Has that claim been made? I think they state it is correlated with wins to a 95% level.

> With how Wins Produced was built, they are essentially summing individual offensive and defensive efficiency statistics and comparing them to team wins, which are highly correlated to efficiency, but those statistics like rebounds are directly tied to the team and not the player.

That’s not true. Berri ran a regression on each boxscore item to see what the realised value of each metric was. http://wagesofwins.com/faq/ has good info, and regression is a tool that is widely used in economics, and pretty well understood.

> Using Wins Produced as your only method in evaluation a player is like zooming in on one feature of an animal.

I don’t get this either. Surely, a debate about value STARTS with numbers, and then tries to explain them. To do anything else is to have no frame of reference. Every argument about evaluation ever includes a number of some sort at some point, so everyone agrees numbers matter. The question is which numbers, how and when and of what relative value.

Also, you know there is a defensive adjustment in Wins produced, yes? Not a terribly good one, but there is one.

But beyond that, the real nugget from Wins Produced is this – basketball is a game of marginal value, i.e. the extra value created at each position is what helps a team win. I think WP does a poor job of measuring marginal value for defense, no question, but marginal value is definitely the key to basketball.

From their FAQ: “Wins Produced, though, explains 95% of wins.”

No, that’s not true. The regression was run on team offensive and defensive efficiency. Then they linked the team stats (possessions employed, possessions gained, etc.) to individual box score stats like rebounds. They did run a regression on blocked shots and a couple of others, however. But Wins Produced is built by estimating the value of individual stats through the team efficiency stats.

And believe me, I understand regression. I’ve done plenty myself with more complicated models. It looks like they did a simple two-variable linear regression with offensive and defensive efficiency for the main model. Regression also shows correlation, not causation, and one has to be careful drawing conclusions.

I don’t think you understood what I was saying with the zooming-in line. I’m saying box score stats tell only one side of the game, and focusing on those without consideration of anything else is flawed. There are also different types of numbers. Remember the micro/macro discussion? You can look at how well a team does with certain players on the floor, which is different from looking at how many shots someone makes and misses.

I already mentioned the defensive adjustment in the post. It’s very tiny, and Berri has admitted it’s basically negligible. It’s applied evenly to each person on the team, so someone like Dwight Howard gets the same boost as Ryan Anderson.

Why don’t other metrics work well with marginal value? I like how Wins Produced was formed, but I don’t like how it’s discussed by its proponents. They use only that stat to assess players and regularly disparage other people’s work. They also have a recent piece responding to the audacity of others saying it doesn’t explain defense well – which you admit it doesn’t – claiming that in fact it does.

Here’s a point I want to reiterate: they’re making a large assumption in jumping from team stats (efficiency numbers that explain wins) to individual stats. The rebound, for example, involves more than a player grabbing the ball after a missed shot. A defender had to force the opponent into that missed shot, but that defender receives no credit under the Wins Produced model. When you’re looking at team stats, that’s fine, because the players act as a single unit. The authors also like to feature how Wins Produced adds up to teams’ win totals. Of course it does! The model is built from team offensive/defensive efficiency, and every NBA stathead knows how well that tracks with wins, but that doesn’t mean it explains player value.

Sorry for the delayed reply. I moved to a new site:

http://ascreamingcomesacrossthecourt.blogspot.com/2012/01/how-wins-produced-fails-in-being.html

“If Wins Produced is a truly faultless method…then it should more highly correlate to something as comprehensive as +/-.”

I don’t understand this logic. So what you’re saying is that you don’t believe in Wins Produced because it challenges your (and +/-’s) beliefs? This seems like the illusion of validity to me.

You also talked about perimeter “stoppers” like Bruce Bowen: don’t you think it helped that the Spurs had Tim Duncan and were coached by Gregg Popovich? Defense is a team activity that can be determined by statistics. The only players I would argue impact their team’s defense are centers, and their contribution can be quantified (blocks, steals, rebounds).

Wins Produced measures the most important things a player does: gaining possessions and using possessions efficiently. According to a player’s defensive +/-, how do we know that he “stopped” player X? How do we know that player X isn’t just Kobe Bryant (43% from the field)? If +/- is so effective, why is it extremely inconsistent? “Great” is a relative term; if anyone can be great in any given season (according to +/-), then is anyone truly “great”?

I have no problem with your opinion. I just suggest that you use evidence and logic to back it up.

You know, you try to pick apart my argument, but then you say things like the contribution of defense can be measured by blocks, steals, and rebounds. Really? I’m sorry, but that’s silly. Do you really think Ibaka last season was better than Garnett and Dwight Howard?

Wins Produced only measures what’s in the box score! And what’s in the box score is arbitrary! They’re just assigning box score stats to possessions, which are then used (on a TEAM level: point differential, essentially) to calculate win totals. That’s it.

Also, Wins Produced’s values are relatively stable because the events are discrete (shots, blocks, etc.), while adjusted +/- models are regression models with natural statistical variation.
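
The counting-stat half of that claim is easy to simulate. Assuming a fixed rebounding skill (the 20% rate and 10 chances per game below are invented numbers), season-long averages of discrete events barely move from season to season:

```python
import random

random.seed(0)

# Simulate a player who grabs 20% of ~10 rebounding chances a game
# over an 82-game season, repeated for five "seasons". The rate and
# chance count are invented for illustration.
def season_rebound_rate(games=82, chances_per_game=10, p=0.2):
    attempts = games * chances_per_game
    grabbed = sum(1 for _ in range(attempts) if random.random() < p)
    return grabbed / attempts

rates = [season_rebound_rate() for _ in range(5)]
# Each simulated season lands close to the true 0.20 rate, because
# the average is taken over hundreds of independent discrete events.
```

Adjusted +/- coefficients, by contrast, are estimated from lineups that are highly collinear (the same players share the floor constantly), which inflates the variance of the estimates even when the underlying skill is constant.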

How do you know you stopped player X? Well, how do you know a player made a basket because he’s skilled, or because another player is really good at finding teammates open? How do you know from basic box score stats that a block isn’t from a player stupidly leaving his man and going for every shot attempt, spamming for blocks, rather than blocking lay-ups and protecting the rim?