Let’s talk about stats vs traditionalists

The advanced stats vs conventional hockey people fight is probably getting old for anyone watching the argument from the outside. Nevertheless, it rages on, flaring up whenever a new school person flagrantly denies conventional wisdom or an old school person balks at nerds analyzing the game with spreadsheets.

Many might be surprised to find out that the two schools of thought actually agree more than they disagree. The problem is, there are fundamental areas of friction between the two sides that may never be resolved. Let’s talk about these issues, from the view of a new school outsider with a background in psychology (i.e., me).

So sum up what you’re talking about here. Why can’t we get past this annoying sniping between the two sides?

The problem is there are intractable differences between traditional hockey thought and newer, evidence-based evaluation. Some things that are considered irreducible primary truths or axioms by traditionalists don’t seem to have any basis – at least not in the current environment of the NHL when it comes to, you know, things that help you win games. Culture can do this – inculcate and insulate beliefs that are widely held and wildly wrong all at once. That said, culture can also bury deep-seated truths in long-held beliefs and rituals. The problem is, it’s difficult to separate truth from myth, especially when some beliefs become sacrosanct or are considered too “obvious” to be challenged.

So while there is probably a lot of overlap (coaches actually tend to make decisions that are largely in agreement with corsi, for example), some of the key friction points are almost irreconcilable between the two sides.

So what are these “friction points”?

We’ll start with an area almost everyone struggles with: randomness. Traditional hockey guys (and almost everyone else) don’t think probabilistically and don’t understand randomness. Like, at all. The traditional mindset is very much “100% causal” in that what happens is considered inevitable. Everyone has full control over their results.

As a result, they are endlessly chasing results and tend to overweight a player’s previous accomplishments, even if they were mostly team based. They also seem to fall prey to the fundamental attribution error, a bias that supposes personal factors are the primary determinants of all outcomes. Would the Canadiens consider Andrew Shaw “a winner who hates to lose” if he had played for the Oilers rather than the Hawks, for instance? Probably not, but the player didn’t really have control over what team he played for, and his perception as a winner is almost wholly dependent on that variable.

How do you come to this conclusion?

While corsi gets the most play in the media, one of the biggest leaps forward in hockey analysis was actually PDO and the subsequent acknowledgement of the sway of randomness over results in the NHL. For years, analysts on the new school side of things have watched as narratives and stories about teams and players have been fabricated to explain the random swing of the percentages. Winners and leaders one year become losers and malcontents the next.
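To make the randomness point concrete: PDO is just on-ice shooting percentage plus on-ice save percentage, usually scaled so that league average sits near 100. Here’s a minimal sketch (the shot totals and talent levels are made-up, illustrative numbers, not real NHL figures) showing that a team that is perfectly average by construction still posts single-season PDOs that swing a couple of points in either direction purely by chance – exactly the kind of swing that gets narrated as “winners” one year and “losers” the next:

```python
import random

def season_pdo(shots_for=2000, shots_against=2000,
               true_sh=0.08, true_sv=0.92, rng=None):
    """One simulated season for a perfectly league-average team.

    PDO = on-ice shooting percentage + on-ice save percentage,
    scaled so league average is ~100. Shot totals and talent
    levels here are illustrative assumptions.
    """
    rng = rng or random.Random()
    # Each shot for goes in with probability true_sh;
    # each shot against is saved with probability true_sv.
    goals_for = sum(rng.random() < true_sh for _ in range(shots_for))
    saves = sum(rng.random() < true_sv for _ in range(shots_against))
    return 100 * (goals_for / shots_for + saves / shots_against)

# 200 seasons of a team whose "true" PDO is exactly 100 by construction
rng = random.Random(42)
seasons = [season_pdo(rng=rng) for _ in range(200)]
print(f"mean PDO: {sum(seasons) / len(seasons):.2f}")
print(f"range:    {min(seasons):.1f} to {max(seasons):.1f}")
```

The mean hovers near 100, but individual seasons land a point or two above or below it – with no difference in underlying talent whatsoever.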

The culture of the league also bears explicit antipathy towards a luck/circumstances mindset in players. No one likes a guy who blames his coach, teammates, GM or luck for bad results. Similarly, no one wants to accept that success can be due to good fortune. Our brains prefer the story, which seems satisfactory and obvious in the aftermath.

What’s next?

Traditionalists frequently mistake what they like for what is good (i.e., what is useful), and that dissuades them from looking for actual relationships between winning and factor X. See: blocking shots, being “tough” and hitting, and how they don’t actually tend to correlate with winning.

Again, this is a common psychological bias. It’s hard to know how much a certain action or metric actually influences winning, so instead of answering that question, the brain substitutes something easier – and seeing a guy who is willing to sacrifice his body to block a shot, fight an opponent, or body check opponents is observationally obvious, easy to recall, and superficially commendable.

On the other hand, stats analysts tend to treat each assumption as a theory that requires verification and testing. And here’s the thing: the stuff that turns out to be the most counter-intuitive is what tends to get the most attention and interest – because it’s controversial, noteworthy and points to a potential market inefficiency. 

Thus you get the conflict of “this is how the game has always been played” versus “but it’s not right.” You have guys who haven’t played at the highest levels telling ex-players, coaches and longtime fans that what they have always believed to be not only true but plainly obvious is, in fact, wrong.

Why do you keep talking about psychology?

Because this is, at its base, a conflict about ways of thinking about the game and not actually the particular facts and squabbles that erupt. One of the problems with eschewing careful stats work and evidence-based analysis is that you can be prone to the narrative fallacy.

That makes people prone to reductive, “story”-based thinking when it comes to attributing success or failure to certain variables. So, for instance, Aaron Ward’s take that Carolina’s unique culture was what led them to win a Cup in 2005-06.

That’s an anecdote, not proof of the role culture has in winning.

No one mentions the teams with great togetherness or culture that don’t win anything (and they certainly exist, because only one team wins the Cup every year). Moves that are made to improve morale, leadership, culture etc. are rapidly forgotten if nothing notable happens after them. Also, a good culture can often trail winning (rather than predict it) because it’s a lot easier to have a good dressing room when the team enjoys success. This is true of “confidence” as well. Disentangling these issues is a lot more complicated than looking at the standings and assuming one team, coach or player simply “wanted it more”. 

So culture doesn’t matter? That’s ridiculous.

No, I wouldn’t say culture doesn’t matter. Managing a room of people to ensure everyone pulls in the same direction is a challenge in every walk of life, so it’s likely a factor in hockey as well. The problem is, culture and similar fuzzy concepts (like leadership and heart) are often wielded as fudge factors or deus ex machina devices that explain away certain outcomes after the fact.

Here’s the problem from a stats/evidence perspective: culture and such is almost always offered as post hoc rationalization and not as a useful, predictive factor in advance. 

To understand the role and effect of culture on winning, it would take careful, systematic study of which aspects of teamwork and leadership actually lead to improved outcomes. This is possible, but NHL teams and traditionalists haven’t done this (as far as I can tell) – they tell stories about what happened and assign roles to actors as heroes, villains etc. after the fact. This is prone to misattribution, scapegoating and over- or underweighting of factors, because no one can really be sure of what matters and what doesn’t.

In short: culture no doubt matters, and it might matter a lot in certain ways. But we don’t really know what matters, how or why, or how to use it to predict success.

Is there anything else to this feud? How about the jock vs nerd divide?

Yeah, there’s another aspect to things like toughness, hitting, size and being tough to play against – i.e., the various factors that still tend to be relatively overweighted in the league. They are part of a particularly masculine “might is right” psychology that underpins all competitions of this nature.

Sports are a proxy for war, sociologically speaking, so hockey men tend to eschew things that make them seem “lesser” – small, weak, vulnerable to physical intimidation – and value things that make them seem dominant – size, strength, aggression, toughness. This bias operates in society in general. Taller men are more often chosen as leaders and tend to make more money (and get more dates).

Being big, tough and mean subconsciously grades a guy as a warrior. Being smaller, less aggressive, more “childlike” marks a guy as someone vulnerable and in need of protection in our lizard brains. That’s why small guys have to prove they can play hockey and big guys have to prove they can’t in the NHL.

So only the stats guys are “right”?

No. I’ve mostly represented that side of the debate because that’s the side I come from and understand best. It’s also notable that hockey coaches and decision makers have been making decisions in the absence of “big data” for the majority of the league’s existence, so they created methods and norms that probably best suited what was, or rather has been, available.

Next, know that the advanced stats side isn’t a single, monolithic group (there are on-going battles amongst new school guys all the time, but those don’t tend to extend beyond the community), so what is “right” is often in contention on that side of the aisle, too. Remember, also, that new school guys don’t have perfect data or knowledge either.

GMs, coaches and players are acting on different information and often under different sets of incentives and pressures. For example, minimizing the role of “luck” in personal and team accomplishments in hockey is probably an adaptive response – believing you are the arbiter of your own fate instills an internal locus of control, which helps motivate individuals to try hard, strive for more, opt for self-improvement, etc.

The stats side may also have to come to accept that the ideal team on paper is entirely impossible to build for practical reasons that include the human, messy, intangible side of the equation. Personalities, demands, desires, egos and expectations can hopelessly complicate things on the ground but be completely imperceptible from the air.

As mentioned, “culture” tends to convey embedded truths and experience, though the challenge is separating dogma and the credulity of blind convention from behaviour and beliefs that are actually useful. This is noteworthy because what was once true for a people, tribe, culture etc. may not necessarily apply anymore because things inexorably change and evolve. This will also apply to the stats side of things as “corsi” and such become part of conventional wisdom – it’s easy to stop evaluating your assumptions when they are widely established as true.

The next phase of evolution for both sides would be to identify what is useful from the other side of the divide. The team (or analysts) that can effectively view the forest and the trees will be able to sidestep common errors, identify market inefficiencies and integrate empiricism (test and prove, test and prove) throughout the organization, from scouts and GMs to the coaches and players. 

Maybe at that point we won’t see these fights anymore. Not that fans, coaches and analysts won’t always have different impressions and opinions about the game, but maybe we won’t have to talk about the fundamental ways in which we approach and think about it.

Recently By Kent Wilson


  • beloch

    Perhaps another source of friction is that most people do not understand stats, but many think they know more than they really do and believe too much of what is written about them. Then they feel betrayed when those stats prove to be less than completely predictive. For example, I’ve seen people on this board who seem to think NHLE tells you exactly how well a single prospect will do when he enters the NHL, but are blissfully unaware of what a population-level statistic is.

    As you point out, there is a lot of randomness (and also emergent behaviour) in hockey. More than even most stat-fiends probably think. Also, while most hockey stats show correlations, they don’t necessarily show causation. Very little work has been done to show how much predictive power advanced stats have.

    Goalies are a great example of this. Many have said at some point, “Goalies are voodoo”, because virtually nobody can predict them with any degree of accuracy. Traditional guys can watch a goalie play and try to get a sense of his style and compare it to other goalies he’s seen. That might yield some clues, but this method is often wildly wrong. Advanced stats offer very little improvement over this method unfortunately. Goalies with fantastic stats one year can suck the next, and vice versa.

    Advanced stats are sometimes represented as rigorous and well established, but they are absolutely a blind, flailing work in progress. They already offer some advantages over the traditional approach, but they can also lead to big mistakes and disappointments if you put too much stock in them and ignore other evidence. It’s still very much a black art and not a science.

    Personally, I’m not too worried about this. If hockey were as easy to quantify with stats as baseball, it would probably be a less interesting sport. Nobody has all the answers just yet and, honestly, I hope it stays that way for a long time. Watching statistical methods evolve and improve is part of what makes hockey so much fun to watch right now. If hockey stats had “had its Newton” by now, it would be boring.

  • Craftmatic4.0

    Advanced stats are like a weather report: they can be useful, but it’s not an exact science and the outcome is not always true! The Flames proved this two seasons ago!

  • Kevin R

    Nice Kent, how to stir the debate pot :-}

    Analytics have made great strides but, in my opinion, they are still just measuring performance. Not sure if the data is available yet, but it would be quite interesting to see players with good corsi/underlying numbers & see a historical measurement of those numbers. Why would that vary from year to year, if it in fact does? Is a strong possession game learned or is it part of a player’s inherent makeup? Is possession more dependent on the systems a team implements, & how does a team keep players performing consistently to the standards that are expected to make the team successful?
    Can a player be taught to be a good possession player? I think what we’ll find is that truly talented players are just that much better than the mainstream of players that make it to the NHL/AHL. Players like Gaudreau & McDavid are good possession players because they are super talented at puck handling & love to carry the puck. To me, analytic profiles should be used as tools to try & enhance the performance of the not-so-talented players in the league.

  • TX Flame

    Many of the analytics guys are prone to the same fallacies. You mention how PDO was such a giant leap forward, yet it is often used very dogmatically to declare that Team X is “lucky” BECAUSE PDO!!! The 2014-2015 Montréal Canadiens are an excellent example with their 2nd highest in the league PDO. The analytics guy calls that “luck”. The traditionalist quite rightly calls it “Carey Price”. The only way to call the 2015-16 Canadiens’ 26th ranked PDO “bad luck” is to agree that losing your all-universe goalie to injury for most of the season is indeed horrible luck, but you also have to admit that the difference in PDO can be attributed to the skill level of the guy in the Habs’ net last year, not just some mystical “regression to the mean”. PDO is not just pure luck. It also has to do with the skill levels of the goalies and the shooters on a given team.

    • piscera.infada

      There’s a bit of a difference between the PDO debate there was around the Flames in 2014/15 and the one you’re describing above. A high PDO based off of having (by far) the best goalie in the league is entirely quantifiable. As such, it’s not “luck” because we know the effect Carey Price has on the save-percentage side of the equation. In other words, the “Carey Price effect” should be inherently repeatable to within ‘x’ number of percentage points, and therefore the corresponding “bump” in PDO is expected.

      That contrasts a team that rides unpredictably high shooting percentages (like the Flames in 2014/15), simply because we know that outside of a few elite hockey players the vast majority are going to shoot around a similar percentage. That last part of the equation is just as quantifiable as the impact Carey Price has on save percentage. So, in the case of the Flames in 2014/15, we can say sure, perhaps Monahan and Gaudreau can ride high shooting percentages, but we can say with a fairly high degree of certainty that the other players (like Lance Bouma) who shot way above their career average either a) just figured it out basically overnight (which we know now wasn’t the case based on 2015/16), or b) were just riding a “hot” percentage.

      In short, labelling PDO effects as “luck” is lazy, but you also have to consider the broader audience these people are writing for. As Kent says in the article, there is a certain amount of randomness in sport (especially hockey). PDO tries to at least confirm which part of that is randomness (shot percentage), and which is repeatable (Carey Price’s save percentage). Saying PDO represents luck is simply an extreme oversimplification.

    • Have you considered that these are not contradictory opinions? The Habs are ridiculously lucky to have the league’s best goalie bail them out of poor play (~48 CF% score adjusted 5v5 the last two seasons). Tokarski, Condon, Scrivens et al. were terrible, but even with a league-average goalie, they wouldn’t have made the playoffs. The hopes of the Habs were pinned mostly on Price being healthy. In other words, they bet their success on something that is absolutely random and unpredictable: luck. They had a healthy Price for the entire 2014-15 regular season and believed that would happen again in 15/16 without making major improvements to the team. This is much in the same vein as the Flames believing Lance Bouma could continue to shoot 15% or whatever and signing him to a bad contract that almost immediately became a pain.

  • The Fall

    Both sides tend to come off as smug assholes at times.

    It’s difficult to consider that both sides can be different and still be correct. I appreciate the analysis of most of FN, and I appreciate the experienced journalism of Darren Haynes.

  • DoubleDIon

    The problems of the traditionalists are obvious. I actually think analytics-based evaluation is deeply flawed too. Corsi in particular is hugely overrated. A shot from Engelland from outside the blueline is not the same as a shot from the slot from Monahan. I can’t remember what the stat was called, but it was basically a weighted metric for shot location with a reduced value on a blocked attempt. It seemed far more relevant than corsi is. I liked what Gulutzan said on 960: a blocked shot means you have been unsuccessful, but you have to block shots when there are opportunities. It’s a balanced and reasonable position. They can’t score if you block it, but you are never attacking if all you do is block shots.

  • FeyWest

    It all comes down to not living off of stats OR intangibles as if they are the be-all end-all, but using them to make observant and guided decisions. Decisions made that way are more likely to be successful, but there will never be a way to get anything 100% correct due to “randomness”.

    From a player perspective, advanced stats are definitely useful, but again, be aware that it’s a team game and there will be a Player Y who is better than Player X even if Player X has better advanced stats. The same logic applies to the traditional side: Player X may be bigger than Player Y, but Player Y knows how to adapt to changing situations whereas Player X does not, thus it could easily be debated that Y > X.

    Great write-up Kent, and I’m definitely interested in seeing everybody else’s thoughts on this. Can certainly feel it being the dog days of summer though hahah.

  • Aadvarkian Abakeneezer

    I’m a numbers guy. I think stats are awesome and finding quantifiable data in athletic performance is the coolest thing ever. The hockey analytics community grinds my gears a bit, though, because they feel the need to put an asterisk on everything that doesn’t fit into the mould.

    For example, the 14/15 Flames. The number of people I heard and still hear disparaging the Flames for having a high shooting percentage and bad possession stats is incredible. It’s as if it somehow takes away from making it to the second round. That idea that unless it was predictable it was a fluke and they didn’t deserve it (true or not) takes all the damn fun out of sports. It’s the worst.

    For the record, I don’t think it was a fluke. Or at least not as much of one as people claim. There are a lot of stats we don’t or can’t track. We call them intangibles, but often they’re very tangible. There are lots of sports science metrics hockey stats guys don’t (have the information to) touch, which contribute to everything from foot speed to PDO to athlete durability. Then there are the real intangibles, which – ask any athlete – can be the difference between a championship performance and totally laying an egg. If nothing else, that was a group of confident-ass hockey players. If confidence were quantifiable, it would be the most important stat in sports. Just a thought.

    • SoCalFlamesFan

      Really good point. What happened to something like passing percentage, as football has? I would also be interested to know if there is a speed difference, on average, between one team and another.

      I have wondered if a slower line would inevitably have better possession because it takes them longer to get anywhere dangerous, being more peripheral. Just spitballing here.

      Often these stats are counted differently from location to location, like shots, but that should average out a bit over time.

  • Joe Flames

    Hey Kent, I know it has often been shown that hitting does not directly correlate to winning, but has anyone ever crunched the numbers to see if hitting (or blocking shots, etc.) improves things that do correlate to winning? In other words, does hitting lead to better possession, which does lead to more wins?

    I agree with the opinion that there is middle ground between traditional strategies and advanced stats. Those of us who work with stats have to be careful to not be too dismissive of factors outside of those that are measurable. For example, it is annoying to listen to some baseball analysts who basically say people are stupid if they don’t agree with their (solely) stats-based opinions. As you said, advanced stats show correlation, not causation.

    Advanced stats make following hockey that much more enjoyable, especially at times like free agent shopping, giving us ways to compare players we have not watched play very often.

    • Disagree. I think “middle ground” is an absolutely good place to be if you know what that truly means. I feel a lot of analytics people refuse to acknowledge certain irrational factors within the game just as traditionalists refuse to acknowledge the deeper side of the game. Middle ground is probably the best place to be if you want to truly understand the game.

      Unfortunately, a lot of people who say they’re “middle ground” take that to mean stats are only useful when they verify pre-existing notions and everything else is an outlier. This happens with both hardcore analytics people and hardcore traditionalists

      • DestroDertell

        I completely disagree. This is what I said about the cult of balance in hockey communities at hfboards a little while ago, in a similar context (too lazy to make another wall):

        A lot of you insist we must seriously consider every side, that analytics are just one part of the equation and that thinking otherwise makes you “arrogant”. That is nonsense, and worse than only using observations while ignoring data altogether. Even if it is limited by not being capable of combining various events and abilities together to get specific results/answers, intuition is a valuable tool you can improve over time by training it. It also lets you adapt and be decisive in response to sudden changes faster than data, which needs a large sample size before anyone can draw conclusions from it. That is better than “balance”, which is completely useless.

        By themselves, data and observations are too heterogeneous to be combined into a decision. Add “saw some ugly turnovers in the defensive zone” to “-3.1 CA60Rel” and what you get is… “saw some ugly turnovers in the defensive zone” and “-3.1 CA60Rel”. There’s a deadlock because data and observation cannot be measured against one another. It doesn’t matter how much data is collected to support a certain viewpoint, or how carefully someone viewed the subject in question; different opinions formed in different ways do not have a universal “importance value” weight you can place on a universal “importance value” scale.

        So what happens? The one forming his opinion, whether he’s a decision maker in a hockey organization or just a poster at HFBoards, will stick to what he already thought. Maybe he’s more of a traditionalist, so he’ll go with what he saw with his own eyes; maybe he’s into analytics, so he’ll go with the composite WOWY, xGoals or some other combination of stats. It’s far more likely he’ll go with the majority in the place where these discussions (forum, work, whatever) take place; someone with a clear preference between data and the eye test already knows what position they’ll take before considering the other side. Of course, the huge majority of people were forming opinions about hockey long before they knew about Corsi, so the “middle ground” opinions are usually far away from the usual opinion in the fancy stats community.

        This is why this cult of balance is poisonous to online hockey discussions. If anyone dares to defend a much-maligned player or strategy by using various fancy stats, expect nothing more than “advanced stats are in the infantile stage”, “advanced stats are just a tool”, “stats are great but misleading” clichés, with zero explanation as to why the stats were misused or taken out of context and how much the “why” affects the numbers (which is just as important as the first part). Of course, the substance of the counter-arguments doesn’t matter when comfort is already found both in being part of the majority and in using the talking points that make you come off as smart in it.

        TL;DR: “Balance” is not a credible position. It is a vehicle to appear smart while ignoring worthwhile data for personal reasons. See the pro-Brouwer comments on this website for a textbook example.


        Post #8: Must ignore analytics when it comes to spending because balance. MASSIVELY up-voted.

  • ngthagg

    PDO is such a terrible stat, it makes me frustrated to see it placed front and center.

    The problems are numerous. The worst is that calculating PDO from SH% and SV% means LOSING information. If you asked two people to evaluate a player, and one knows the PDO while the other knows the SH% and SV%, the one who knows the PDO knows LESS than the other.

    There are other problems. When we add SH% and SV% together, we are implicitly assuming that a point of SH% is equal to a point of SV%. Maybe this is true, maybe it isn’t, but it certainly isn’t obviously true. (Has anyone looked into this? I’d love to read about it.)

    The basic predictive power of PDO is the idea of regression to the mean: PDO will tend to move towards 1. It’s a seductive idea, largely because it seems to be supported by the numbers. (Here’s a good example: http://www.arcticicehockey.com/2011/12/20/2648333/pdo-regression-to-the-mean-or-why-you-should-ignore-shooting) But the most likely alternative explanation – that some teams, and some players, are better or worse than others – would still produce the same kind of movement. A team with a PDO of 1.03 one season is still going to see a decline even if it has above-average SH% and SV% that in the long run would give it a PDO of 1.01.

    And the idea that some players are better or worse than others is one that is inarguable. Does anyone think that if the Flames had continued with the same goaltenders as last season, their SH% would have risen to bring them back to a PDO of 1? The idea is preposterous. PDO regression is a blunt instrument that only allows us to make accurate predictions on the outliers.

    Understanding the role of natural variation (or luck) in hockey is important, but PDO is exactly the wrong way to go about it. The sooner it disappears from advanced stats discussions, the better.

    • I feel you have a fundamental misunderstanding of the stat, because it is absolutely not a predictive stat.

      Think of PDO like an engine warning light. When that light flashes, it does not mean the engine is broken, but that it could if you continue to ignore it. It can never tell you when your engine is going to break, only that it will at some point.

      PDO can tell us whether an average player is unsustainably under/overperforming. It is not zero-sum (i.e., a 90 SV% does not guarantee a 10 SH%), nor does it always guarantee regression to 1 (teams with good/bad goaltending). Teams with high PDOs may need to change something, because high PDOs are almost never sustainable. Teams with low PDOs may not need to change anything, because those are almost never sustainable either. PDO, in combination with other stats, can help explain why your team is winning or losing.

      There is absolutely no telling when a PDO will rise/drop or even if it will rise/drop. High/low PDOs almost certainly affect how we perceive teams/players. That’s why Bob Hartley has a Jack Adams and a pink slip within the span of 365 days. Go back through the 14/15 season FN posts to see Lambert yellin’ at folks about the Flames’ inevitable regression and then watching as it doesn’t happen week after week. fun times!

      Here’s the best place you can go to read about it more in depth:


    • ChinookArchYYC

      To add to what christian tiberi said: PDO should not be the end point when someone decides to dig deeper into player or team results; it’s just the beginning. PDO, like any advanced stat, cannot be looked at in isolation. It has to be considered within a basket of other stats to get a better picture of what MAY have happened.

      The 2014 Flames are a great example. The top 4 player PDO results came from Russell (104.4), Bouma (103.4), Hudler (103.1) and Colborne (103.1). Flames fans would/should immediately question 3 of the 4 results – which is to say, of the players mentioned, only Hudler would be expected to lead the team’s PDO results over an entire season. Looking at SV% and SH% is a start, but you’d expect one or the other stat to be abnormally high; in the Flames’ case, we know that’s SH%. Look at Corsi Rel and you immediately see that all the players except Hudler had a negative possession impact on their teammates. So, they were on the ice when the puck went in at a higher-than-expected/deserved rate.

      The point is that it’s lazy to look at a single advanced stat and make a snap decision on a team or player result. It takes a bit more work.

      PDO, if used properly, would/could reduce GM mistakes when they are negotiating with players. Given Bouma’s -9.4 Rel Corsi result and 11 SH% in 2014/15, he was lucky to cash in with the contract he received.

      • ngthagg

        I can appreciate what you guys are saying, that PDO is useful as a warning light, or as a starting point for further analysis. But the fact that PDO has some use doesn’t change the fact that it’s a terrible stat. Plus/minus can also be used as a warning light, or as a starting point, but it is still a flawed stat, and our analysis of hockey is better without it.

        PDO is the same. We’d be better off without it. It’s not even that big of a switch, because we don’t lose anything by talking about SH% and SV% instead. It actually becomes easier to understand!

        PDO is not needed. It obscures other information. It doesn’t add or reveal any insight. It doesn’t belong.

        • I strongly urge you to go back and read the FTF piece I linked to. I feel you still have some fundamental misconceptions about the stat.

          Your first problem is conflating PDO and plus/minus. Plus/minus was deemed terrible and pointless only because we later discovered it was the only stat purely driven by PDO. I cannot emphasize this enough. A stat that was once widely used and accepted was ripped apart by those who studied its relationship to PDO. In this case, PDO not only questioned traditional hockey thinking, it destroyed it.

          To this day, PDO is still very useful, and you probably won't find a statistician who tells you otherwise. Looking at SV% and SH% on their own has always been done, and the conclusions have almost always been misguided and eventually proven wrong (he's shooting better! the goalie is finally tuned in!). Combining the two stats has almost always shown which players we have inflated/deflated perceptions of, and which players are deserving of them.

          PDO is honestly one of the most important stats you can learn. That’s why it sticks around in analytics circles. If you have something better, I’d like to hear it.

          • ngthagg

            I had read the FTF piece before I made my first post. I’m not misunderstanding how PDO works.

            What I honestly don't understand is how SV% and SH% separately can be misleading, but added together provide clarity and insight. As I said previously, if you have two people evaluating a player, one who knows the PDO and another who knows the SV% and SH%, the second one knows more about the player.

            As you say, plus/minus was once widely used and accepted, but it was discovered to be flawed. PDO is also flawed (for different reasons, though). It deserves to be tossed as well.

          • A higher SH% on its own is often attributed to an increase in performance, a player turning a corner, or some coaching “system”. When people see a player shoot at a higher level, the narrative attached is that something has finally clicked and the player has taken the next step. This is almost never true.

            A higher SV% on its own is often attributed to some defensive success, especially when a team gets beaten possession-wise. The narrative attached is that a defensive system is working, a player is contributing defensively, or a goaltender has elevated his game. This is also almost never true.

            If you want evidence of this phenomenon, check out FN posts from 14/15: anything regarding Bouma’s contract extension, Bob Hartley’s Jack Adams, anything on Russell/Wideman. For further examples, check out Toronto blogs in 12/13 and the first half of 13/14, Dallas blogs in 13/14, Colorado blogs since Roy was hired, etc. You will find people absolutely insisting that unsustainable performances are totally normal, only to eat crow the next year.

            SH% and SV% taken separately give people the impression that improvement alone is the cause, and that the numbers will only continue to improve or at least stay the same. As the FTF post detailed, SV% and SH% are incredibly random, even across large sample sizes. They are measures of luck, defensively and offensively, and combining them shows how a team’s luck is impacting its performance. Taken together as PDO, there is a benchmark against which we can see over/underperformance and which teams and players will likely regress/improve in the future.

            I’d also caution against saying that knowing a player or team’s SH% or SV% is “knowing more.” Those are reflective stats that only tell you what happened; they cannot tell you anything beyond that. They will always fluctuate, and if you bet on them, you’ll have problems.

          • ngthagg

            “If you have something better, I’d like to hear it.”

            Actually, I’ve got an idea that’s been kicking around in my head since the end of last season. Both SH% and SV%, and therefore PDO, are calculated from shots and goals, i.e., SH% = goals/shots. But we don’t count just shots on net anymore; we include missed shots and blocked shots. So why not count those for SH% as well?

            This would give us a Corsi Conversion Rate, or CCR. Or in other words, how often does a given player (or team) convert a Corsi attempt into a goal?

            It’s got some advantages. The increased sample size of shots means CCR will fluctuate less than SH%, giving us a more trustworthy number. It unifies the per 60 rates, since Corsi/60 * CCR = Goals/60, by definition. And there may be some insights to be gained regarding blocked shots, if we compare CCR with the similar FCR (Fenwick Conversion Rate).

            On the goaltending side, I’m not sure where to proceed. Counting missed shots makes sense to me, since an NHL goaltender doesn’t leave a lot of net to shoot at. A shot that misses 3 inches to the inside is credited as a save, but a shot that misses 3 inches to the outside isn’t credited at all. But it seems to me that blocked shots shouldn’t be counted for the goalie at all. But if goalies are measured by a Fenwick SV%, they won’t be directly comparable to a shooter being measured by CCR. So there’s definitely some work to do there.

            It’s worth mentioning as well that this doesn’t do away with PDO. It can be calculated with Corsi and Fenwick numbers the same as it can be calculated with shot numbers.
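            To make the proposal concrete, here's a minimal sketch of CCR and the per-60 identity mentioned above. All the numbers are made up for illustration; CCR here is simply goals divided by Corsi attempts, as described:

```python
# Sketch of the proposed Corsi Conversion Rate (CCR): the fraction of
# Corsi attempts (shots on goal + misses + blocks) that become goals.
# All numbers below are hypothetical illustration values.
def ccr(goals, corsi_attempts):
    return goals / corsi_attempts

goals, corsi, minutes = 5, 120, 300
corsi_per_60 = corsi / minutes * 60   # attempt rate
goals_per_60 = goals / minutes * 60   # scoring rate

# The identity claimed above holds by construction:
# Corsi/60 * CCR == Goals/60
assert abs(corsi_per_60 * ccr(goals, corsi) - goals_per_60) < 1e-9
```

            The identity is definitional rather than an empirical finding, which is part of the appeal: CCR just re-expresses existing counts, so nothing new needs to be tracked.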

          • That wouldn’t help any understanding of puck luck; you would just be expanding the scope of the SH% stat (from goals per shot to goals per Corsi attempt).

            Shots and Corsi are good measures of offensive production because they’re a demonstrably repeatable talent. Regardless of whether a team’s PDO is 900 or 1200 on any given night, it can have the same number of shots in those two games. SH% and SV% on any given night can randomly fluctuate. That’s why PDO is a useful measure of luck and S/C/F are useful measures of production.
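            A quick toy simulation illustrates the point: hold shot volume and shooting talent fixed at assumed league-average values, and single-game SH% still swings wildly while the long-run average settles near the true talent. Every number here is an assumption for illustration only:

```python
# Toy simulation: fixed shot volume and fixed shooting talent, yet
# nightly SH% fluctuates heavily. Parameters are illustrative assumptions.
import random

random.seed(1)
TRUE_SH_TALENT = 0.08  # assumed league-average shooting talent
GAMES = 1000
SHOTS_PER_GAME = 30    # identical shot volume every night

nightly_sh = []
for _ in range(GAMES):
    goals = sum(1 for _ in range(SHOTS_PER_GAME) if random.random() < TRUE_SH_TALENT)
    nightly_sh.append(goals / SHOTS_PER_GAME)

spread = max(nightly_sh) - min(nightly_sh)     # single games vary enormously
season_sh = sum(nightly_sh) / GAMES            # but the long-run average sits near 8%
print(spread, season_sh)
```

            The shot count never moves, yet individual games range from shutout-level shooting to hot-streak percentages, which is exactly why a run of high SH% or SV% nights says so little on its own.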

  • KACaribou

    My main issue with the analytics community is summed up in this one line, and I am a tweener:

    “The problem is there are intractable differences between traditional hockey thought and newer, evidence-based evaluation.”


    Traditional hockey guys also go by “evidence-based evaluation.”

    Their evidence-based evaluation, is often more evidence-based than the analytics guys too. It’s called being an EYE WITNESS.

    When police go to a crime scene, they can gather all the stats they want, calculating probabilities: but ultimately they prefer an EYE WITNESS.

    Eye witnessing is indeed also EVIDENCE-BASED.

    • Byron Bader

      Funnily enough, eyewitness testimony is often very, very wrong. The more detail provided, the worse the accuracy. The more time that passes since the incident, the worse the accuracy.

      Similar problems apply to drafting and system evaluation. If the eye test in hockey weren’t subject to these same problems, teams wouldn’t miss 80% of the time when drafting and 90% of the time when trying to find a player who truly makes an impact in the game.

    • ChinookArchYYC

      Of course you have to watch the games to evaluate on-ice play. Advanced stats are meant to augment and inform an examination, not replace it. Who on earth would rather look at a spreadsheet of results without the context of watching games and players? To provide value, advanced stats must give an evaluator data the human eye can’t see (like how much a single player impacts possession). Also, after collecting a ton of data, we can begin to question some sacred cows of hockey thinking (e.g., do blocked shots help win games?)

  • freethe flames

    I always think of the fancy stats as a tool to evaluate the game, not as a tool to predict the game. An earlier piece asked about stats that cause possession numbers to change. Things like O-zone hits and faceoff % must contribute to possession, and these matter. I also think some possession numbers can be misleading and have to be assessed through viewing of the game.

    Here’s an example that I think frequently happens with the Flames. Backs’ line starts in the D-zone and he loses the faceoff; he and his linemates work hard to regain possession, and they advance the puck to the O-zone. At the end of the shift he takes a weak shot at net that creates an O-zone faceoff, and out trots Monny’s line. Backs’ weak shot at times frustrates me, but by the end of the shift he has created possession.

    Contrast this with when he wins the faceoff and they attack and create a real, legitimate scoring chance that results either in a goal or another O-zone possession for one of the other lines. The cause is faceoff wins or losses.

    How many times during a game does an offensive forecheck result in a change of possession, or a good hit at the defensive blueline change possession? These plays make a difference, and teams need to look for players who do these things well.

  • oddclod

    Stats-based decisions alone can’t secure you success. It’s just not Moneyball; there are too many variables. A necessary tool in the box, no doubt. The title of the article predisposed me not to read it, but it’s dead-zone time for hockey news after an exciting offseason so far. After actually reading it, the quality of the article stands in contrast to the title. Good article, no less. Thanks for the good read in these low times.

  • OKG

    PDO is a poor stat. It fails to account for too many factors because it

    1) Is restricted to “shots”, which is a poor stat
    2) Assumes all shots are equal. An empty-netter with a lead is counted the same as a strong-side wrister into a goalie’s chest. It is also often used as an all-situations stat rather than a single-situation one, which is equally idiotic.
    3) Is used as a default “reason” a team had a good or bad year. The 2014-15 Flames had a great SH% and an average SV%? PDO predicted they should have an average SH% and an average SV% in 2015-16. Instead they had a great SH% and a league-worst SV% – what did PDO actually tell us? Nada. By combining the two stats you have actually attempted to devalue both SV% and SH%.
    4) If it doesn’t always regress to 1.00 – what use is it? If you cannot predict what it SHOULD regress to for a given team, you cannot predict WHETHER it should regress at all.
    5) Almost absolutely HAS to have an inverse correlation to shot totals. There is data showing goalies post worse SV%s when they face fewer shots. Which, btw, is also a flaw of SV% and SH% as individual stats. Unless SF and SA are close to league average, PDO should not be.

    Someday we will have stats that can better predict the things PDO pretends to, but does not. But since PDO is largely useless, it will never matter. Skaters (wingers/centres/TJ Brodie) do reliably drive on-ice SH%, and skaters (defensemen/centres/TJ Brodie) do drive on-ice SV%, so handwaving it all off is weak. In general, any single stat that gets your instincts into hand-wave mode should be taken with a grain of salt; no single statistic can accurately quantify a player’s value. You need multiple statistics to use statistics, often including some that are unrecorded at present. This is especially true of leagues outside the NHL. I am always open to more information. I am always closed to using isolated stats as a handwave tool, or as a “Mark Fayne Will Make The Oilers A Contender” tool, all the same.

  • Slatekeeper

    There has also been an evolution of provocative writers on the analytics side, like Travis Yost at TSN, who harshly criticizes any transaction that doesn’t align favourably with shot differential. Just outshooting your opponent does not significantly increase the probability of winning; the quantity of quality scoring opportunities matters far more than all shots from everywhere being weighted equally.

    And what about making extra passes to get a higher probability scoring chance? Shoot shoot shoot isn’t necessarily better than pass pass shoot.

    The team with fewer shots wins more often than you’d think. If that’s true, why does Corsi matter at all?