The 2017-2018 NBA season culminated in a final showdown between the Cleveland Cavaliers and the Golden State Warriors for the fourth year in a row. While predicting those two teams to make the final round of the playoffs was perhaps the easiest call ever made, we wanted to see if the other 14 teams that earned a postseason berth could be predicted. No statistic tells the whole story of a team's season quite like postseason odds, the probability of making the playoffs. We love this stat because, when plotted over the course of an entire season, the hopes and dreams of an entire fanbase can be seen lifted to unimaginable heights, or crushed into the nothingness that we all eventually become.
The Seattle Mariners had the chance this season, in 2018, to break their 16-year postseason drought, the longest active drought across the four major professional sports. Until, of course, they let the upstart Oakland A's overtake them while collapsing in historic fashion. This can be seen in their playoff odds chart, which, after rising to the highest of highs, plummets to lows whose hopelessness can only be matched by the despair of Browns fans.
Meg Rowley shared FanGraphs' version of the Mariners' playoff odds mountain, which went semi-viral in the baseball community, making an appearance in Grant Brisbee's Grantland:
For most of this year, we have been developing our own version of playoff odds using techniques from Empirical Bayes. We believe our version is easier to understand, less resource-intensive, and more accurate at the beginning of the season, when sample sizes are small, than other popular versions of this stat. Now that the baseball season is winding down, we are excited to announce our application of Bayesian Playoff Probabilities to the NBA! If you thought we were only a baseball blog, with the occasional Super Bowl article, well, apparently you missed our one other post on the NBA, where we determined the Orlando Magic were the best team to purchase as a financial investment over the next few years. Feel free to argue that point with us in the comments!

Before we publish our 2018-2019 predictions, however, we wanted to see how our methodology worked for the 2017-2018 season. We did something similar for the 2017 MLB season.

First, we developed a prior for each team's winning percentage (wp%), which is simply a probability distribution. Each prior is described, as with all distributions, by a mean (its average or expected value) and a variance (its spread). When we have a mean and a variance, we can use the method of moments to determine the parameters that define the distribution. In our case, we use the Beta distribution, which is defined by two parameters, alpha and beta: Beta(α, β). With a mean x̄ and a variance s², α and β can be found algebraically by rearranging the following two equations and solving for α and β:

x̄ = α / (α + β)

s² = αβ / ((α + β)² (α + β + 1))
This can actually be resolved into a basic Python function with only three lines:
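As a sketch of what those three lines might look like (the function name here is ours, not necessarily the authors'):

```python
def beta_params(mean, var):
    """Method of moments: solve the Beta mean/variance equations for (alpha, beta)."""
    # From mean = a / (a + b) and var = a*b / ((a + b)**2 * (a + b + 1)),
    # the total a + b falls out first, then the mean splits it into a and b.
    total = mean * (1 - mean) / var - 1
    return mean * total, (1 - mean) * total
```

For example, `beta_params(0.5, 0.01)` yields `(12.0, 12.0)`: a .500 team with variance 0.01 gets the symmetric prior Beta(12, 12).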
So for each NBA team, we just need to come up with a mean and a variance, and then we are all set to create our personalized prior! Unfortunately, this is not as easy as it sounds, as there is no exact method for arriving at these statistics. Here is what we did.

For the expected mean, we took the average of eight expert win predictions for each team for the 2017-2018 season. These predictions were a combination of over/unders set by Vegas casinos, including Westgate and OddsShark, as well as win predictions created by expert panels from ESPN, USA Today, and CBS Sports. The average of expert predictions is usually just as good as, if not better than, the prediction of any one expert picked at random, a phenomenon known as the Wisdom of the Crowd. If you are a loyal reader (thank you so much! <3), you may remember that we showed the expert median is actually more accurate than the expert average. In this case, though, the average slightly outperformed the median in both MAE and RMSE. For our 2018-2019 predictions, since we won't know the resulting win totals yet, we will have to decide whether to use the expert average or the median for the expected mean of our priors. Below are the expert averages per team, which we will use for our expected means, as well as each team's actual 2017-2018 win total. Not too bad!

The second, and last, thing we need for our priors is a variance. I found the win totals for each team's last five seasons, 2012-2016 for this example, converted them to winning percentages, and then calculated the rolling average and variance. Since each team's five-year rolling variance varied greatly, I took the median across teams for a consistent final value of 0.01. This variance comprises both the variance of true talent and the variance of luck. To isolate the talent variance, we calculated the variance of luck for each team over 82 games, based on the average expert wins above.
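The two prior inputs can be sketched in a few lines (the function names and the sample numbers below are illustrative, not the authors' actual data):

```python
import statistics

def prior_mean(win_predictions, games=82):
    """Wisdom of the Crowd: average the expert win totals, convert to a wp%."""
    return statistics.mean(win_predictions) / games

def pooled_variance(win_pct_history_by_team):
    """Variance of each team's last five seasons of wp%, pooled via the league median."""
    return statistics.median(statistics.variance(wps) for wps in win_pct_history_by_team)

# eight hypothetical expert win predictions for one team
print(prior_mean([65, 67, 66, 68, 66, 67, 65, 68]))  # 66.5 / 82
```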
After subtracting the variance of luck (0.003), we were left with a true-talent variance of about 0.007 for each team. For each team, then, we calculated our prior distribution parameters by running the function above with an expected winning percentage equal to the expert average divided by 82 and an expected variance of roughly 0.007. For example, the Golden State Warriors were calculated to have a prior distribution modeled by Beta(13.955, 3.24). This looks pretty good!

Here is every team's expected winning percentage, with 80, 90, 95, and 99% confidence intervals, as well as their actual winning percentage in 2017-2018. Most teams' actual wp% fall within the 80% CI, the narrowest interval, while all 30 teams fell within the 99% CI.

Now that we have our priors, we can calculate a posterior distribution for each team's expected winning percentage for every day of the regular season. This is done by simply adding a team's number of wins to their alpha parameter and their number of losses to the beta parameter. This works because each individual game is modeled by the Binomial distribution, which behaves really nicely with the Beta distribution. We try not to delve too far into the mathematical weeds on this site, so we'll leave the proof as an exercise to the reader (or just click this link).

The Golden State Warriors lost their first game of the season, so the posterior distribution modeling their true winning percentage becomes Beta(13.955, 4.24). We repeat this for every day of the regular season. The distribution narrows as we gather more information about the team, and, consequently, so do the confidence intervals. Here is a GIF of every team's winning percentage modeled by a posterior distribution on every day of the regular season. Feel free to pause the slideshow to see the progression of a single team!

So how do we turn these winning percentage distributions into playoff probability?
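Both steps are tiny in code. A sketch, where the Warriors' parameters are the ones quoted above and the .500-team simplification for the luck term is our assumption:

```python
# "Luck" is just binomial sampling noise in wp% over an 82-game season;
# for a roughly .500 team this is p * (1 - p) / n
p, n = 0.5, 82
luck_var = p * (1 - p) / n      # ~0.003
talent_var = 0.01 - luck_var    # ~0.007, the prior variance used above

# Beta-Binomial conjugacy makes the daily update trivial:
# wins add to alpha, losses add to beta
def update_posterior(alpha, beta, wins, losses):
    return alpha + wins, beta + losses

# Warriors after losing their opener: Beta(13.955, 3.24) -> Beta(13.955, 4.24)
warriors = update_posterior(13.955, 3.24, wins=0, losses=1)
```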
We use Monte Carlo simulation to calculate expected wins, of course! For any particular day of the regular season, it is our fundamental belief that a team will regress towards their true winning percentage, which lies somewhere in that day's posterior distribution. By sampling from that distribution, we can estimate expected wins by multiplying the sampled wp% by the number of games left in the season and adding the team's currently "banked" wins.

Here's an example. The Golden State Warriors lost their first game of the season, giving them a record of 0-1 with 81 games left to play. From that day's posterior distribution, let's say we get a sampled wp% of 0.75. Since 0.75 * 81 = 60.75 and they have 0 banked wins, we can say that, based on this one sample, we expect the Warriors to win 61 games this season. By comparing the expected wins for each team within one sample, we can record the top 8 teams from each conference as making the playoffs.

Now, instead of resting on our laurels after one sample, we repeat this process 100,000 times. Since we record who makes the playoffs each time, we calculate playoff odds by counting how many times a team made the playoffs out of the 100,000 samples. We also calculate a more probable expected win total by averaging the expected wins across all 100,000 samples per team; this equals the mean of the posterior distribution multiplied by the number of games left, plus the banked wins. Taking the Warriors example again, after their first loss they were still on pace for an expected win total of 62. When their playoff odds were calculated, the Warriors finished in the top 8 in wins in their conference in over 99,000 samples. This makes sense, as their 62 expected wins still far outpaced any other team in the NBA after one game.

So how did our playoff projections do? For the Western Conference, our preseason playoff projections could not have been better.
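Putting it together, here is a sketch of the simulation loop; the team names in the example are placeholders, and only the Warriors' posterior Beta(13.955, 4.24) comes from the text above:

```python
import random

def sample_expected_wins(alpha, beta, banked, games_left):
    """One Monte Carlo draw: sample a true wp% from the posterior Beta,
    project it over the remaining schedule, and add the banked wins."""
    return banked + random.betavariate(alpha, beta) * games_left

def playoff_odds(teams, spots=8, n_sims=100_000):
    """teams: {name: (alpha, beta, banked_wins, games_left)} for one conference.
    Returns the fraction of simulations in which each team finishes top-`spots`."""
    made = dict.fromkeys(teams, 0)
    for _ in range(n_sims):
        wins = {name: sample_expected_wins(*params) for name, params in teams.items()}
        for name in sorted(wins, key=wins.get, reverse=True)[:spots]:
            made[name] += 1
    return {name: count / n_sims for name, count in made.items()}

# Warriors after an 0-1 start: the mean of Beta(13.955, 4.24) is ~0.767,
# so the average over many draws lands near 0.767 * 81, about 62 expected wins
```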
Our top 5 teams, all above 80% odds, all made the playoffs: the Warriors, Rockets, Timberwolves, Thunder, and Spurs. However, we had 5 teams fighting for the last 3 playoff spots. Denver and the LA Clippers each had about a 60% chance, while the Jazz, Pelicans, and Trail Blazers hovered around 50%. In fact, each of the three teams at 50% made it, while the two at 60% missed out. Due to the battle for the last spot, the playoff odds did not converge until the middle of March.

In the Eastern Conference, our top 6 teams made the playoffs, 5 at around 90% and the Heat at 76%. Including the Heat, we had four teams we thought were most likely to snag the last three spots: the Heat, Hornets, Pistons, and 76ers. While the Heat and 76ers did make the playoffs, the one surprise was the Pacers, whom we gave only a 17% chance of making the playoffs in the preseason. This can be attributed to the Pacers far outperforming the experts we averaged for our prior. The Eastern Conference converged almost a month earlier than the Western Conference, with its playoff teams essentially determined by the middle of February.

The Pacers' 17% odds were the lowest of any team to make the NBA playoffs last season. Apart from that outlier, the Jazz and Pelicans each had a relatively healthy 47% chance based on their priors. Similar to the Mariners this year, the Pistons were the NBA team with an epic collapse last season. The Pistons went an NBA-worst 2-11, dropping their playoff odds from 95% to 15% between January 1 and January 31 and essentially squashing all their playoff hopes.
In our opinion, our Empirical Bayes methodology for calculating playoff odds did a great job describing the 2017-2018 NBA season. Our priors accurately gave the best teams in each conference the highest chances of making the playoffs, while teams that still had the potential to make the postseason, like the Pacers, were given a fighting chance.
How do our expected wins look? Thanks for asking! Our expected-wins predictions were mostly horizontal, meaning most teams did not deviate much from our prior, or preseason, expectations. The exceptions were the 76ers and Pacers winning more than most experts thought, and the Hornets losing more than expected. Regardless, all landed within our preseason 99% confidence intervals. The convergence of the playoff odds per conference is perhaps misleading, as, based on expected wins, there was still quite a fight for seeding. Apart from that, though, we think these charts quantitatively show that the Western Conference was more competitive than the Eastern Conference in terms of making the playoffs last season.

We can't wait to apply this methodology to the 2018-2019 NBA season! Those predictions should be up before tipoff in the next few weeks. Do you have any questions about our methodology, conclusions, or code? Let us know in the comments below! As always, all of our visuals, code, and data can be found on our GitHub.

The SaberSmart Team

P.S. If you enjoyed this article, and need something off Amazon anyway, why not support this site by clicking through the banner at the bottom of the page? As a member of the Amazon Affiliates program, we may receive a commission on any purchases. All revenue goes towards the continued hosting of this site.