
MambaNet: A Hybrid Neural Network for Predicting the NBA Playoffs

10/31/2022 · by Reza Khanmohammadi, et al.

In this paper, we present MambaNet: a hybrid neural network for predicting the outcomes of basketball games. Contrary to other studies, which focus primarily on season games, this study investigates playoff games. MambaNet processes a time series of teams' and players' game statistics and generates the probability of a team winning or losing an NBA playoff match. In our approach, we utilize Feature Imitating Networks to provide latent signal-processing feature representations of game statistics, which are further processed with convolutional, recurrent, and dense neural layers. Three experiments using six different datasets are conducted to evaluate the performance and generalizability of our architecture against a wide range of previous studies. Our final method achieved AUCs of 0.72 to 0.82, beating the best-performing baseline models by a considerable margin.


1 Introduction

T Alhanai and MM Ghassemi contributed equally to this work and should be considered shared senior authors.

Sporting events are a popular source of entertainment, with immense interest from the general public. Sports analysts, coaching staff, franchises, and fans alike all seek to forecast winners and losers in upcoming sports match-ups based on previous records. The interest in predicting sporting outcomes is particularly pronounced for professional team sport leagues including Major League Baseball (MLB), the National Football League (NFL), the National Hockey League (NHL), and the National Basketball Association (NBA); postseason play in these leagues, namely the playoffs, is of greater interest than games in the regular season because teams compete directly for prestigious championship titles.

The development of statistical models that robustly predict the outcome of playoff games from year to year is a challenging machine learning task because of the plethora of individual, team, and external factors that together confound the propensity of a given team to win a given game in a given year. In this work, we develop MambaNet: a large hybrid neural network for predicting the outcome of a basketball match during the playoffs. There are five main differences between our work and previous studies: (1) we use a combination of both player and team statistics; (2) we account for the evolution of player and team statistics over time using a signal processing approach; (3) we utilize Feature Imitating Networks (FINs) [1] to embed feature representations into the network; (4) we predict the outcome of playoff games, as opposed to season games; and (5) we test the generalizability of our model across two distinct national basketball leagues. To assess the value of our proposed approach, we performed three experiments comparing MambaNet to previously proposed machine learning algorithms using NBA and Iranian Super League data.

Figure 1: An overview of MambaNet's architecture. First, the home (column 1, yellow boxes) and away (column 1, purple boxes) teams' stats and the two teams' players' stats are fed to the network. Next, four FINs are used to represent the input stats' signal features; these FINs contain trainable (column 1, dark circles) and non-trainable (column 2, light circles) layers. The resulting representations are further processed with convolutional and dense layers. Raw time-domain features are also extracted from the input stats using LSTM networks. Finally, all of these features are combined to make the final prediction.
#  Statistic                      #  Statistic                      #  Statistic
1 Minutes Played 2 Field Goal 3 Field Goal Attempts
4 Field Goal Percentage 5 3-Point Field Goal 6 3-Point Field Goal Attempts
7 3-Point Field Goal Percentage 8 Free Throw 9 Free Throw Attempts
10 Free Throw Percentage 11 Offensive Rebound 12 Defensive Rebound
13 Total Rebound 14 Assists 15 Steals
16 Blocks 17 Turnover 18 Personal Fouls
19 Points 20 True Shooting Percentage 21 Effective Field Goal Percentage
22 3-Point Attempt Rate 23 Free Throw Attempt Rate 24 Offensive Rebound Percentage
25 Defensive Rebound Percentage 26 Total Rebound Percentage 27 Assist Percentage
28 Steal Percentage 29 Block Percentage 30 Turnover Percentage
31 Usage Percentage 32 Offensive Rating 33 Defensive Rating
34 Winning Percentage (Team-only) 35 Elo Rating (Team-only) 36 Plus/Minus (Player-only)
Table 1: The game statistics used in this work. Statistics 1 to 33 are shared in representing both teams and players; the last three are specific to teams or players, as noted. (#: feature number)

2 Related Work

The NBA is the most popular contemporary basketball league [2, 3]. Several previous studies have examined the impact of different game statistics on a team’s propensity to win or lose a game [4, 5]. More specifically, previous studies have identified teams’ defensive rebounds, field goal percentage, and assists as crucial contributing factors to succeeding in a basketball game [6]; for machine learning workflows, these game attributes may be used as valuable input features to predict the outcome of a given basketball game [7, 8].

Probabilistic models to predict the outcome of basketball games have been proposed by several previous studies. Jain and Kaur [9] developed a Support Vector Machine (SVM) and a Hybrid Fuzzy-SVM (HFSVM) model and reported 86.21% and 88.26% accuracy, respectively, in predicting the outcome of basketball games. More recently, Houde [10] experimented with SVM, Gaussian Naive Bayes (GNB), Random Forest (RF), K-Nearest Neighbors (KNN), Logistic Regression (LR), and XGBoost (XGB) classifiers over fifteen game statistics across the last ten games of both home and away teams. They also experimented over a more extended period of NBA season data, from 2018 to 2021, and reported 65.1% accuracy in classifying winners and losers. In contrast to Jain and Kaur and to Houde, who addressed game outcome prediction as a binary classification task, Chen et al. [11] identified the winner and loser by predicting their exact final game scores. They took a data mining approach, experimenting with 13 NBA game statistics from the 2018-2019 season; after feature selection, this number shrank to 6 critical basketball statistics. In terms of classifiers, the authors experimented with KNN, XGB, Stochastic Gradient Boosting (SGB), Multivariate Adaptive Regression Splines (MARS), and Extreme Learning Machine (ELM) to classify the winner of NBA matchups. They also studied the effect of different game-lag values (from 1 to 6) on the success of their classifiers and found that a game-lag of 4 performed best on their feature set.

Fewer studies have used neural networks to predict the outcome of basketball games; this is mostly due to the challenge of over-fitting in the presence of (relatively) small basketball training datasets. Thabtah et al. [12] trained Artificial Neural Networks (ANN) on a wide span of data, extracting 20 team stats per NBA matchup played from 1980 to 2017. Their model obtained 83% accuracy in predicting NBA game outcomes; they also demonstrated the significance of three-point percentage, free throws made, and total rebounds as features that enhanced their model's accuracy.

3 Methods

Baseline approach: A majority of existing studies use a similar methodological approach. For each team (home and away), a set of $f$ game statistics (the features) is extracted over the previous $n$ games (the game-lag value [11]), forming an $n \times f$ matrix. The mean of each stat is then calculated across the $n$ games, resulting in a $1 \times f$ feature vector for each team. The two feature vectors are concatenated, yielding a $2f$-dimensional vector for each unique matchup between a given pair of teams. Over $m$ matchups, this results in an $m \times 2f$ matrix which is used to train a classification model (each experiment reports its train/test set size in more detail). The label of each sample indicates whether the home team won ($y = 1$) or lost the game ($y = 0$).

FIN Training: Our method follows the same steps as the baseline approaches, with one critical difference: instead of calculating the mean of each feature across the last $n$ games with the mean equation, we feed the entire $n \times f$ matrix to a pretrained mean FIN and stack hidden layers on top of it (hereafter, this FIN-based deep feedforward architecture is referred to as FINDFF) to perform binary classification. In addition to the mean, we also imitate the standard deviation, variance, and skewness.

All FINs are trained using the same neural architecture: a sequence of dense layers with 64, 32, 16, 8, and 4 units, respectively, followed by a single-unit sigmoid layer. The activation function is ReLU for the first two hidden layers and linear for the rest. Each model is trained in a regression setting using 100,000 randomly generated signals as the training set and the handcrafted feature value of each signal as the training label. To integrate a FIN within the larger MambaNet structure, we then freeze its first three layers, fine-tune the fourth layer, and remove the remaining two layers.
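As a concrete illustration, here is a minimal Keras sketch of pretraining a mean-imitating FIN and wrapping it into a FINDFF classifier. The signal length, value range, classification-head width, and the per-statistic application of the FIN are our assumptions, not details from the paper:

```python
# Hedged sketch: pretrain a mean-imitating FIN, then build a FINDFF classifier.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SIG_LEN = 10                                   # assumed game-lag length n
units = [64, 32, 16, 8, 4]
acts = ["relu", "relu", "linear", "linear", "linear"]  # ReLU on first two only

fin = models.Sequential(
    [tf.keras.Input(shape=(SIG_LEN,))]
    + [layers.Dense(u, activation=a) for u, a in zip(units, acts)]
    + [layers.Dense(1, activation="sigmoid")]  # single-unit sigmoid head
)

# Regression pretraining: 100,000 random signals labeled with the handcrafted
# feature (here the mean; std/variance/skewness FINs are trained analogously).
# Values in [0, 1] keep the target reachable by the sigmoid output.
X = np.random.rand(100_000, SIG_LEN).astype("float32")
fin.compile(optimizer="adam", loss="mse")
fin.fit(X, X.mean(axis=1), epochs=10, batch_size=256, verbose=0)

# Freeze the first three dense layers, keep the fourth trainable, and drop the
# last two before embedding the FIN in a larger network.
for layer in fin.layers[:3]:
    layer.trainable = False
embedding = models.Model(fin.input, fin.layers[3].output)  # 8-unit output

# FINDFF: apply the FIN to each statistic's length-n signal (one column of the
# n-by-f stats matrix), then stack a dense classification head on top.
N_FEATS = 35                                   # team feature count f
stats_in = tf.keras.Input(shape=(SIG_LEN, N_FEATS))
per_stat = layers.Permute((2, 1))(stats_in)    # (f, n): one signal per stat
emb = layers.Flatten()(layers.TimeDistributed(embedding)(per_stat))
h = layers.Dense(16, activation="relu")(emb)   # assumed head width
findff = models.Model(stats_in, layers.Dense(1, activation="sigmoid")(h))
```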

MambaNet: In Figure 1, we provide an illustration of MambaNet, our proposed approach. The complete set of player and team statistics used in this study can be found in Table 1. The input to the network is an $n \times f$ stats matrix, which is passed both to the pretrained FINs and to LSTM layers that extract the team's sequential features. For each team, we also extract an $n \times f'$ stats matrix for each of the roster's top ten players and pass these matrices to the same FINs and LSTM layers. Next, we flatten the teams' signal feature representations and feed them to dense layers, whereas for players, we stack the representations and feed them to 1D convolutional layers. Finally, all latent representations of a team and its ten players are concatenated before the final sigmoid layer.
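A highly simplified sketch of this assembly, following Figure 1, is given below. Layer widths, the per-statistic FIN application, and the exact fusion details are our assumptions; the four FIN embeddings are untrained placeholders for the pretrained models of the previous sketch.

```python
# Hedged sketch of the MambaNet assembly: FIN + LSTM towers for each team,
# Conv1D over stacked player embeddings, concatenation, and a sigmoid output.
import tensorflow as tf
from tensorflow.keras import layers, models

N_GAMES, F_TEAM, F_PLAYER, N_PLAYERS = 10, 35, 34, 10

def fin_embedding():
    # Placeholder for a pretrained 8-unit FIN embedding (see previous sketch).
    inp = tf.keras.Input(shape=(N_GAMES,))
    h = layers.Dense(64, activation="relu")(inp)
    h = layers.Dense(32, activation="relu")(h)
    h = layers.Dense(16)(h)
    return models.Model(inp, layers.Dense(8)(h))

fins = [fin_embedding() for _ in range(4)]   # mean, std, variance, skewness

def fin_features(x):
    """Apply each FIN to every stat's length-n signal: (n, f) -> four (f*8,)."""
    per_stat = layers.Permute((2, 1))(x)     # (f, n)
    return [layers.Flatten()(layers.TimeDistributed(fin)(per_stat)) for fin in fins]

def team_tower(x):
    feats = fin_features(x) + [layers.LSTM(32)(x)]   # FINs + raw time features
    return layers.Dense(64, activation="relu")(layers.Concatenate()(feats))

def player_tower(xs):                        # xs: top-ten player stat matrices
    embs = [layers.Concatenate()(fin_features(x) + [layers.LSTM(32)(x)]) for x in xs]
    stacked = layers.Lambda(lambda t: tf.stack(t, axis=1))(embs)  # (10, d)
    return layers.Flatten()(layers.Conv1D(32, 3, activation="relu")(stacked))

home_t = tf.keras.Input(shape=(N_GAMES, F_TEAM))
away_t = tf.keras.Input(shape=(N_GAMES, F_TEAM))
home_p = [tf.keras.Input(shape=(N_GAMES, F_PLAYER)) for _ in range(N_PLAYERS)]
away_p = [tf.keras.Input(shape=(N_GAMES, F_PLAYER)) for _ in range(N_PLAYERS)]

merged = layers.Concatenate()([
    team_tower(home_t), team_tower(away_t),
    player_tower(home_p), player_tower(away_p),
])
out = layers.Dense(1, activation="sigmoid")(merged)   # P(home team wins)
mambanet = models.Model([home_t, away_t] + home_p + away_p, out)
```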

4 Experiments & Results

We performed three experiments to assess the performance of our proposed method. To demonstrate the advantage of leveraging FINs in deep neural networks, we first compare the performance of FINDFF against a diverse set of other basketball game outcome prediction models trained using NBA data. In the second experiment, these models are tested for generalization on unseen playoff games from the Iranian Super League. Finally, we assess the performance of MambaNet for accurate playoff outcome prediction. In all three experiments, the Area Under the ROC Curve (AUC) was used as the primary evaluation metric.

Ref   FC   Alg      AUC (17-18  18-19  19-20  20-21  21-22)
[9] 33 SVM 0.65 0.57 0.59 0.50 0.60
TW 33 FINDFF 0.71 0.62 0.71 0.55 0.65
[10] 15 GNB 0.60 0.52 0.60 0.52 0.55
RF 0.62 0.62 0.60 0.58 0.60
KNN 0.49 0.53 0.63 0.60 0.64
SVM 0.55 0.53 0.64 0.51 0.61
LR 0.61 0.65 0.65 0.61 0.66
XGB 0.63 0.67 0.65 0.50 0.59
TW 15 FINDFF 0.68 0.77 0.69 0.76 0.70
[13] 14 NBAME 0.51 0.53 0.53 0.57 0.59
TW 14 FINDFF 0.62 0.59 0.64 0.60 0.62
[11] 6 ELM 0.53 0.55 0.55 0.53 0.64
KNN 0.58 0.53 0.51 0.56 0.55
XGB 0.60 0.58 0.53 0.53 0.55
MARS 0.63 0.53 0.53 0.59 0.57
TW 6 FINDFF 0.69 0.65 0.57 0.63 0.66
[12] 20 ANN 0.55 0.55 0.53 0.58 0.53
TW 20 FINDFF 0.59 0.68 0.67 0.61 0.65
Table 2: A performance comparison between FINDFF and other previously-developed machine learning models on five years of NBA Playoffs, from the 2017-2018 (17-18) to the 2021-2022 (21-22) season. (Ref: Reference, FC: Feature Count, Alg: Algorithm, TW: This Work)

4.1 Experiment I

This experiment aims to determine whether using FINs in conjunction with deep neural networks can enhance playoff outcome prediction. We followed the same machine learning pipeline as previous studies to make the comparison fair; however, we applied a pretrained mean FIN to the $n \times f$ matrix instead of taking the mean directly, providing an otherwise identical setting when comparing FINDFF with classic machine learning algorithms. Since the FIN is the only differing component in this setting, its effect can be isolated.

Dataset: All data were gathered from NBA games played across five seasons, from 2017-2018 to 2021-2022. We used each year's season games as training data (1,230, 1,120, 1,060, 1,086, and 1,236 games, respectively) and its playoff games as testing data (82, 82, 83, 85, and 87 games, respectively), leaving us with five different NBA datasets.

Results: In Table 2, we compare FINDFF models with five other methods from the literature using different features (game statistics), game-lag values, and classification algorithms. The FINDFF network successfully outperformed all other methods with a 0.05 to 0.15 AUC margin in every year of NBA data, demonstrating the advantage of the feature imitation technique in game outcome prediction.

Ref   FC   Alg      AUC (17-18  18-19  19-20  20-21  21-22)
[4] 33 SVM 0.52 0.55 0.55 0.57 0.55
TW 33 FINDFF 0.62 0.57 0.67 0.72 0.59
[5] 15 GNB 0.52 0.55 0.59 0.52 0.52
RF 0.59 0.55 0.55 0.72 0.67
KNN 0.52 0.52 0.55 0.52 0.55
SVM 0.52 0.72 0.67 0.52 0.52
LR 0.52 0.52 0.55 0.52 0.55
XGB 0.55 0.72 0.55 0.72 0.64
TW 15 FINDFF 0.59 0.70 0.67 0.72 0.64
[8] 14 NBAME 0.52 0.55 0.52 0.52 0.55
TW 14 FINDFF 0.61 0.66 0.66 0.59 0.64
[6] 6 ELM 0.52 0.55 0.55 0.52 0.55
KNN 0.52 0.67 0.55 0.59 0.67
XGB 0.52 0.67 0.60 0.52 0.52
MARS 0.74 0.59 0.74 0.62 0.68
TW 6 FINDFF 0.71 0.71 0.71 0.62 0.71
[7] 20 ANN 0.59 0.52 0.55 0.55 0.52
TW 20 FINDFF 0.62 0.67 0.62 0.67 0.71
Table 3: A performance comparison between FINDFF and other previously-developed machine learning models, trained on five years of NBA data (from 17-18 to 21-22) and tested on the 2020-2021 Iranian Super League Playoffs. (Ref: Reference, FC: Feature Count, Alg: Algorithm, TW: This Work)

4.2 Experiment II

The purpose of this experiment is to examine the generalizability of the methodologies from the first experiment. As in Experiment I, each model is trained on the five NBA datasets, but here it is tested on the Iranian Super League playoffs. This allows us to compare how well each method generalizes when predicting test cases from a significantly different data source.

Dataset: For training, we used the same NBA datasets discussed in Experiment I; for testing, we used the 2020-2021 Iranian Basketball Super League playoffs.

Results: As shown in Table 3, FINDFF models outperformed almost all other methodologies in predicting the outcome of the Iranian Basketball Super League playoffs by a range of 0.02 to 0.12 AUC.

4.3 Experiment III

The first two experiments showed that FINs provide higher and more generalizable performance in playoff outcome prediction compared to the baselines. Having developed MambaNet by building on top of the FINDFF architecture, we now demonstrate how integrating each additional component affects our hybrid model's performance.

Dataset: We used the same NBA datasets introduced in Experiment 1.

Results: In Table 4, we present the results of our incremental experiment. The first row reports the simplest version of MambaNet, which passes 35 team features to a FINDFF network imitating the mean (m). Compared with the baselines, we use a more extensive set of basketball game statistics to form a team's feature vector, since this helps satisfy the data-intensive requirements of neural networks. At this stage, the AUC varies between 0.70 and 0.72. Next, we trained three more FINs to imitate standard deviation (std), variance (v), and skewness (s) using the same neural architecture as the mean FIN. The second row shows how adding these signal feature representations improves AUC by up to 0.10 and 0.03 on the 2018-2019 and 2020-2021 NBA datasets, respectively. Furthermore, integrating players' statistics alongside team statistics leads to a 0.02 AUC increase across four NBA datasets (third row). Lastly, as shown in the fourth row, we used RNN layers to create a time-series representation of team and individual statistics, resulting in improvements of 0.03 and 0.02 in 2019-2020 and 2021-2022, respectively.

R  FS    FC     IF           Layers              AUC (17-18  18-19  19-20  20-21  21-22)
1  T     35     m            Dense                    0.71   0.70   0.71   0.72   0.70
2  T     35     m,std,v,s    Dense, Conv              0.71   0.80   0.71   0.75   0.70
3  T,P   35,34  m,std,v,s    Dense, Conv              0.73   0.82   0.73   0.75   0.72
4  T,P   35,34  m,std,v,s    Dense, Conv, RNN         0.73   0.82   0.76   0.75   0.74
Table 4: Comparing the performance of different MambaNet versions on five years of NBA Playoffs, from the 2017-2018 (17-18) to the 2021-2022 (21-22) season. (R: Row, FS: Feature Source (T: team, P: player), FC: Feature Count, IF: Imitated Features)

5 Conclusion

In this work, we tackled playoff basketball game outcome prediction from a signal processing standpoint. We introduced MambaNet, which incorporates historical player and team statistics and represents them through signal feature imitation using FINs. To compare our method with the baselines, we used NBA and Iranian Super League data, which enabled us to demonstrate both the performance and the generalizability of our method. Future studies could use fusion techniques or other suitable data modeling techniques, such as graphs, to develop more advanced neural networks that integrate team and player representations more efficiently and predict playoff outcomes more accurately.

References

  • [1] Sari Saba-Sadiya, Tuka Alhanai, and Mohammad M. Ghassemi, “Feature imitating networks,” in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 4128–4132.
  • [2] Efehan Ulas, “Examination of National Basketball Association (NBA) team values based on dynamic linear mixed models,” PLoS ONE, vol. 16, 2021.
  • [3] Kyle Kawashiri, “The societal influence of the NBA,” 2020.
  • [4] Shaoliang Zhang, Miguel Ángel Gomez, Qing Yi, Rui Dong, Anthony Leicht, and Alberto Lorenzo, “Modelling the relationship between match outcome and match performances during the 2019 FIBA Basketball World Cup: A quantile regression analysis,” International Journal of Environmental Research and Public Health, vol. 17, no. 16, 2020.
  • [5] Bence Supola, Thomas Hoch, and Arnold Baca, “The role of secondary assists in basketball – an analysis of its characteristics and effect on scoring,” International Journal of Performance Analysis in Sport, vol. 22, no. 2, pp. 261–276, 2022.
  • [6] Christos Koutsouridis, Dimitrios Lioutas, Christos Galazoulas, Georgios Karamousalidis, and Nikolaos Stavropoulos, “Effect of offensive rebound on the game outcome during the 2019 Basketball World Cup,” Journal of Physical Education and Sport, vol. 20, pp. 3651–3659, 2020.
  • [7] Tomislav Horvat and Josip Job, “The use of machine learning in sport outcome prediction: A review,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 10, p. e1380, 2020.
  • [8] Rory P. Bunker and Fadi Thabtah, “A machine learning framework for sport result prediction,” Applied Computing and Informatics, vol. 15, no. 1, pp. 27–33, 2019.
  • [9] Sushma Jain and Harmandeep Kaur, “Machine learning approaches to predict basketball game outcome,” in 2017 3rd International Conference on Advances in Computing, Communication & Automation (ICACCA) (Fall), 2017, pp. 1–7.
  • [10] Matthew Houde, Predicting the Outcome of NBA Games, Bryant University, 2021.
  • [11] Wei-Jen Chen, Mao-Jhen Jhou, Tian-Shyug Lee, and Chi-Jie Lu, “Hybrid basketball game outcome prediction model by integrating data mining methods for the National Basketball Association,” Entropy, vol. 23, no. 4, 2021.
  • [12] Fadi A. Thabtah, Li Zhang, and Neda Abdelhamid, “NBA game result prediction using feature analysis and machine learning,” Annals of Data Science, vol. 6, pp. 103–116, 2019.
  • [13] Ge Cheng, Zhenyu Zhang, Moses Ntanda Kyebambe, and Nasser Kimbugwe, “Predicting the outcome of NBA playoffs based on the maximum entropy principle,” Entropy, vol. 18, no. 12, 2016.