1 Introduction
Basketball is changing at a fast pace. It is broadly considered that the game has entered a new era, with a different style of play dominated by three-point shots. An enjoyable and illustrative overview of the main changes is given in [4], where the author demonstrates the drastic changes in the game through graphical illustrations of basketball statistics.
Advanced statistics have played a key role in this development. Beyond the common statistics, such as shooting percentages, blocks, steals, etc., advanced metrics (such as the "Player Efficiency Rating", "True Shooting Percentage", "Usage Percentage", "Pace Factor", and many more) attempt to quantify "hidden" layers of the game and measure a player's or a team's performance and efficiency across a wide range of aspects. In layman's terms, the following example is illustrative. A good offensive player who is a bad defender might score 20 points on average but concede 15. An average offensive player who is a good defender might score 10 points on average but deny 10 points (i.e. 5 good defensive plays), not through blocks or steals, but through effective personal and team defence. Using traditional basketball statistics, the former player scores more points, but overall he is less efficient for the team.
A major contribution to the field of advanced basketball statistics is made in [10], where the author "demonstrates how to interpret player and team performance, highlights general strategies for teams when they're winning or losing and what aspects should be the focus in either situation". In this book, several advanced metrics are defined and explained. Further studies have contributed in this direction too, see [8] and [9].
With the assistance of advanced statistics, modern NBA teams have optimised their franchises' performance, from acquiring players with great potential revealed through the advanced metrics, a strategy well known as "moneyball", to changing the style of the game. A modern example is the increase in three-point shooting, a playing strategy also known as "three is greater than two".
Despite the huge success of advanced statistics, these measures are mostly descriptive. Predictive models for the NBA and college basketball started arising in the past ten years.
The authors in [1] first used machine learning algorithms to predict the outcome of NBA basketball games. Their dataset spans the seasons 1991–1992 to 1996–1997. They use standard accumulated team features (e.g. rebounds, steals, field goals, free throws, etc.), including offensive and defensive statistics together with wins and losses, reaching 30 features in total. Four classifiers were used: linear regression, logistic regression, SVMs and neural networks. Linear regression achieves the best accuracy, which does not exceed 70% on average (73% at most) across seasons. The authors observe that accuracy varies significantly across seasons, as "some seasons appear to be inherently more difficult to predict". An interesting aspect of their study is the use of an "NBA experts" prediction benchmark, which achieves 71% accuracy.

In [14], the author trains three machine learning models, linear regression, a maximum likelihood classifier and a multi-layer perceptron, to predict the outcome of NBA games. The dataset of the study spans the seasons 2003–2004 to 2012–2013. The algorithms achieve accuracies of 67.89%, 66.81% and 68.44% respectively. The author used a set of eight features, and in some cases PCA boosted the performance of the algorithms.
The authors of [16] use machine learning approaches to predict the outcome of American college basketball (NCAAB) games. These games are more challenging to predict than NBA games for a number of reasons (fewer games per season, less frequent competition between the same teams, etc.). They use a number of machine learning algorithms, such as decision trees, rule learners, artificial neural networks, Naive Bayes and Random Forest. Their features are the four factors and the adjusted efficiencies, see [10, 16], all team-based features. Their dataset extends from season 2009 to 2013. The accuracy of their models varies with the seasons, with Naive Bayes and neural networks achieving the best performance, which does not exceed 74%. An interesting conclusion of their study is that there is "a "glass ceiling" of about 74% predictive accuracy that cannot be exceeded by ML or statistical techniques", a point that we also discuss in this study.

In [2], the authors train and validate a number of machine learning algorithms using NBA data from 2011 to 2015. They assess the performance of the models using 10-fold cross-validation. For each year, they report the best-performing model, achieving a maximum accuracy of 68.3%. They also enrich the game data with players' injury data, but with no significant accuracy improvements. However, a better approach for validating their model would have been to assess it on a hold-out set of data, say the last year. In their study, a different algorithm achieves the best performance in each season, which does not allow us to draw conclusions or apply the approach to a new dataset; how do we know which algorithm is right for the next year?
The authors of [7] focused on two machine learning models, SVM and hybrid fuzzy-SVM (HFSVM), to predict NBA games. Their dataset is restricted to the regular season 2015–2016, which is split into training and test sets. Initially, they used 33 team-level features, ranging from basic to more advanced team statistics. After applying a feature selection algorithm, they arrived at the 21 most predictive attributes. The SVM and HFSVM achieve 86.21% and 88.26% average accuracy respectively on 5-fold cross-validation. Their high accuracy, though, could be an artefact of the restricted dataset, as it is derived from a single season, which could be an exceptionally good season for predictions. A more robust method to test their accuracy would be to use k-fold cross-validation on the training set for parameter tuning and then validate the accuracy of the model on a hold-out set of games (which was not part of the cross-validation). Otherwise, the model might be overfitted.

In [11], the authors use deep neural networks to predict the margin of victory (MoV) and the winner of NBA games. They treat the MoV as a regression problem, whereas the game winner is treated as a classification problem. Both models are enhanced with an RNN using time-series of players' performance. Their dataset includes data from 1983 onwards. They report an accuracy of 80% on the last season.
Although advanced statistics have been used for more than a decade in the NBA, European basketball is only recently catching up. The Euroleague (the top-tier European professional basketball club competition) organised a student competition in 2019 (the SACkathon, see http://sackathon.euroleague.net/), whose aim was "to tell a story through the data visualization tools".

Despite the outstanding progress on the prediction of NBA games, European basketball has been ignored in such studies. European basketball has a different playing style, the duration of the games is shorter and even some defensive rules are different. Applying models trained on NBA games to European games cannot be justified.
Predicting Euroleague games poses a number of challenges. First, the Euroleague's new format is only a few years old, so data is limited compared to the NBA, where one can use tens of seasons for modelling. Also, the number of games per season is much lower, 30 in each of the first three seasons compared to about 80 games in the NBA. Therefore, the volume of data is much smaller. Secondly, Euroleague teams change every season. The Euroleague has closed contracts with 11 franchises which participate every year, but the remaining teams are determined through other paths (see https://www.euroleaguebasketball.net/euroleaguebasketball/news/i/6gt4utknkf9h8ryq/euroleaguebasketballalicenceclubsandimgagreeon10yearjointventure). Hence, the Euroleague is not a closed championship like the NBA; teams enter and leave the competition every year, making it more challenging to model the game results.
When the current study started, no study existed that predicted European basketball games. During this project, a first attempt was made by the Greek sports site sport24.gr, which gave the probabilities of each team winning a Euroleague game, probabilities which were updated during the game. Their model was called the "Pythagoras formula" (see https://www.sport24.gr/Basket/liveprognwseisnikhssthneyrwligkaapotosport24grkaithnpythagorasformula.5610401.html). According to this source, "the algorithm was developed by a team of scientists from the department of Statistics and Economics of the University of Athens in collaboration with basketball professionals. The algorithm is based on complex mathematical algorithms and a big volume of data". However, this feature is no longer available on the website, and no information about its performance is available.

More recent attempts at predicting European basketball games can be found on Twitter accounts which specialise in basketball analytics. In particular, the account "cm simulations" (@CmSimulations) gives predictions of Euroleague and Eurocup games each week for the current season, 2019–2020. In a recent tweet (https://twitter.com/CmSimulations/status/1225901155743674368?s=20, https://twitter.com/CmSimulations/status/1226233097055850498?s=20), they claim to have achieved an accuracy of 64.2% for rounds 7 to 24 of this season. However, no details of the algorithm, the analysis or the methodology have been made public.
In this article, we aim to fill this gap, using modern machine learning tools to predict the outcomes of European basketball games. We also aim to outline the machine-learning framework and pipeline (from data collection to feature selection, hyperparameter tuning and testing) for this task, as previous approaches might have ignored one or more of these steps.
From the machine-learning point of view, this is a binary classification problem, i.e. an observation must be classified as 1 (home win) or 2 (away win). For our analysis we focus on the Euroleague championship in the modern-format era, covering the three seasons 2016–2017, 2017–2018 and 2018–2019. The reason we focus on the Euroleague is entirely personal, due to the authors' experience and familiarity with European basketball. The reason we focus on the latest three seasons is that, prior to 2016, the championship consisted of several phases in which the teams were split into several groups and not all teams played against each other. In the modern format, there is a single table, where all teams play against each other. This allows for more robust results.
We should emphasise that the aim of this paper is to extend and advance knowledge of machine learning and statistics in basketball; no one should use it for betting purposes.
This article is organised as follows. In Section 2 the dataset, its size, features, etc., is described, whereas in Section 3 a descriptive analysis of the dataset is presented. Section 4 focuses on the machine-learning predictive modelling and explains the calibration and validation processes, while Section 5 discusses further insights such as the "wisdom of the crowd". We summarise and conclude in Section 6.
2 Data Description
The data is collected from the Euroleague's official website (https://www.euroleague.net/) using scraping methods and tools, such as the Beautiful Soup package (https://pypi.org/project/beautifulsoup4/) in Python. The code for data extraction is available on GitHub (https://github.com/giasemidis/basketballdataanalysis). The data collection focuses on the regular seasons only.
2.1 Data Extraction
Data is collected and organised in three types: (i) game statistics, (ii) season results and (iii) season standings.

Game statistics include the statistics of each team in a game, such as offense, defense, field goals attempted and made, percentages, rebounds, assists, turnovers, etc. Each row in this data represents a single team in a match together with its statistics.

Season-results data files include game-level results, i.e. each row corresponds to a game. Each file stores the round number, the date, the home and away teams and their scores.

Season-standings data files contain the standings at the end of each round, with the total points, number of wins and losses, total offense, defense and score difference.
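The scraping step described above can be sketched as follows with Beautiful Soup. The HTML snippet and its class names are simplified, hypothetical stand-ins for the real markup of the Euroleague results pages, which differs; the sketch only illustrates how game-level rows are extracted from a page.

```python
from bs4 import BeautifulSoup

# Hypothetical, simplified markup standing in for a Euroleague results page.
html = """
<div class="game">
  <span class="home-team">Real Madrid</span>
  <span class="home-score">85</span>
  <span class="away-team">CSKA Moscow</span>
  <span class="away-score">78</span>
</div>
"""

def parse_games(page_html):
    """Return a list of (home, home_score, away, away_score) tuples."""
    soup = BeautifulSoup(page_html, "html.parser")
    games = []
    for div in soup.find_all("div", class_="game"):
        games.append((
            div.find("span", class_="home-team").text,
            int(div.find("span", class_="home-score").text),
            div.find("span", class_="away-team").text,
            int(div.find("span", class_="away-score").text),
        ))
    return games

print(parse_games(html))  # -> [('Real Madrid', 85, 'CSKA Moscow', 78)]
```

In practice, each parsed row would be appended to the season-results file, one row per game.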
3 Descriptive Analysis
In this section, we perform a descriptive analysis of the data to understand some key aspects of European basketball in the new-format era, i.e. the seasons 2016–2017, 2017–2018 and 2018–2019. For simplicity, we refer to each season by its end year, e.g. 2016–2017 is referred to as season 2017 hereafter.
In Figure 1, we plot the distribution of the scores of the Home and Away teams for each of the first three seasons of the modern era. We observe that the Home team distributions are shifted to higher values than the Away team distributions. Also, the latter are wider for the seasons 2017 and 2018. The median and 75th-percentile values of the Home team distributions increase from 2017 to 2019, i.e. Home teams tend to score more.
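The season-level summaries behind this comparison (medians and 75th percentiles of the Home and Away scores) can be computed directly from the season-results files. A minimal sketch on toy data follows; the column names are assumptions, not the actual schema of the data files.

```python
import pandas as pd

# Toy stand-in for the season-results files; column names are assumptions.
results = pd.DataFrame({
    "season": [2017, 2017, 2017, 2018, 2018, 2018],
    "home_score": [80, 85, 78, 82, 90, 84],
    "away_score": [75, 79, 81, 70, 83, 77],
})

# Per-season medians and 75th percentiles of the score distributions.
medians = results.groupby("season")[["home_score", "away_score"]].median()
q75 = results.groupby("season")[["home_score", "away_score"]].quantile(0.75)
print(medians)
print(q75)
```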
In Figure 2, the Home team score distributions are plotted for wins (blue) and losses (red). The win distributions are shifted to higher values in recent seasons, i.e. Home teams score more points on average each year in victories, whereas in losses the median score remains constant. Home team victories are typically achieved when the Home team scores around 85 points, whereas in losses their median score is 76 points.
A similar plot, but for the Away Teams is shown in Figure 3. Same conclusions as before can be drawn.
Finally, in Figure 4 we plot the distribution of the points difference in each season for Home and Away wins respectively. The distribution of the score difference for Home wins remains unchanged across the years: half of the Home wins (up to the median) are decided by fewer than 9 points, whereas a quarter (up to the first quartile) are decided by 5 points or fewer. For Away wins, the distribution of differences is shifted to lower values: half of the Away wins end with a difference of 8 points or fewer, and a quarter end with a difference of 4 points or fewer. In layman's terms, one could say that it is harder to win on the road.

We summarise a few more statistics in Table 1. We notice that a clear majority of the games (more than 62%) end in a Home win every year. The average Home score has an increasing trend, as does the average Home score in Home wins. However, the Away scores fluctuate over the seasons 2017–2019.
Season  Home Wins  Away Wins  Mean Home Score  Mean Away Score  Mean Home Win Score  Mean Away Win Score 

2017  152  88  80.8  77.5  78.9  79.6 
2018  151  89  82.6  78.8  80.5  81.0 
2019  155  85  82.8  78.6  80.6  80.9 
It should be emphasised that these trends and patterns are very preliminary, as only three regular seasons have been completed so far, and no statistically significant conclusions can be drawn about trends across the seasons. One conclusion that does seem to arise is that Home court is an actual advantage, as Home teams have the biggest share of wins, and teams also tend to score more, and hence win, when playing at Home.
3.1 Probability of Winning
Now, we explore the probability of a team winning when it scores more than n points. In particular, we would like to answer the question: when a team scores at least, say, 80 points, what is the probability of winning?
The results are plotted in Figure 5, where we plot the probability of winning for all games, Home games and Away games respectively. It becomes evident that, for the same number of points, a Home team is more likely to win than an Away team, which suggests that defence is better when playing at home. We also observe that in the Euroleague a team must score more than 80 points to have a better-than-random chance of winning a game.
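The conditional probability behind Figure 5 can be estimated empirically by pooling all team-game observations and computing the fraction of wins among those scoring at least n points. A minimal sketch on toy data, with assumed column names:

```python
import numpy as np
import pandas as pd

# Toy game-level results; the column names are assumptions.
games = pd.DataFrame({
    "home_score": [85, 70, 92, 78, 88],
    "away_score": [80, 75, 85, 90, 70],
})

def p_win_scoring_at_least(df, n):
    """Empirical P(win | the team scored at least n points),
    pooling home and away sides of every game."""
    scores = np.concatenate([df["home_score"], df["away_score"]])
    wins = np.concatenate([df["home_score"] > df["away_score"],
                           df["away_score"] > df["home_score"]])
    mask = scores >= n
    return wins[mask].mean()

print(p_win_scoring_at_least(games, 80))
```

Restricting the pooled arrays to home-only or away-only observations yields the Home and Away curves of the figure.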
The conclusion of this descriptive analysis is that patterns remain unchanged across the years and no major shift in basketball style has been observed during these three years of the new format of the Euroleague. This is an important conclusion for the modelling assumptions discussed in the next section.
4 Predictive Modelling
In this section, we formulate the problem and discuss the methodology, the feature selection, calibration and testing phases.
4.1 Methodology
From the machine-learning point of view, the prediction of the outcome of basketball games is a binary classification problem: one needs to classify whether a game ends as 1 (home win) or 2 (away win).
Next, we need to define the design matrix of the problem, i.e. the observations and their features. Here, we experiment with two approaches, match-level and team-level predictions, elaborated in the next subsections.
4.1.1 Match-level predictions
At the match-level, the observations are the matches and the features are:

The position in the table, prior to the game, of the home and away team respectively.

The average offense over the season's matches, prior to the game, of the home and away team respectively.

The average defense over the season's matches, prior to the game, of the home and away team respectively.

The average difference between the offense and defense over the season's matches, prior to the game, of the home and away team respectively.

The form, defined as the fraction of the past five games won, of the home and away team respectively.

The final-four flag, a binary variable indicating whether the home (resp. away) team qualified for the final four of the previous season.
At the match-level, we consider features of both the home and away teams, resulting in 12 features in total. All features quantify the teams' past performance; there is no information about the game to be modelled itself. These features are calculated for each season separately, as teams change significantly from one season to the next. As a result, there is no information (features) for the first game of each season, which we therefore exclude from our analysis.
This list of features can be further expanded with more statistics, such as the average shooting percentage, assists, etc., of the home and away teams respectively. For the time being, we focus on the aforementioned simple performance statistics and explore their predictive power.
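The per-team half of these features can be computed from the season-results data as follows. This is a minimal sketch on toy data: the column names are assumptions, and the table position and final-four flag are omitted, as they come from the standings files.

```python
import pandas as pd

# Toy per-team season history; column names are assumptions.
team_games = pd.DataFrame({
    "round": [1, 2, 3, 4, 5, 6],
    "scored": [80, 75, 90, 85, 70, 88],
    "conceded": [78, 80, 82, 70, 75, 80],
})
team_games["won"] = team_games["scored"] > team_games["conceded"]

def features_before_round(df, rnd):
    """One team's side of the match-level features, using only the games
    played before round `rnd` (position and F4 flag are omitted here)."""
    past = df[df["round"] < rnd]
    return {
        "avg_offense": past["scored"].mean(),
        "avg_defense": past["conceded"].mean(),
        "avg_diff": (past["scored"] - past["conceded"]).mean(),
        "form": past["won"].tail(5).mean(),  # fraction of last five games won
    }

print(features_before_round(team_games, 6))
```

Computing this for both the home and the away team of a fixture yields the 12-dimensional match-level feature vector.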
4.1.2 Team-level predictions
At the team level, the observations are teams and the machine learning model predicts the probability of a team winning. To estimate the outcome of a match, we compare the winning probabilities of the two competing teams; the one with the higher probability wins the match. Hence, we build a team predictor with the following features:

The home flag, an indicator of whether the team plays at home (1) or away (0).

The position of the team in the table prior to the game.

The average offense of the team over the season's matches prior to the game.

The average defense of the team over the season's matches prior to the game.

The average difference between the offense and defense of the team over the season's matches prior to the game.

The form, defined as above, of the team.

The final-four flag, a binary variable indicating whether the team qualified for the final four of the previous season.
Now, we have seven features per observation (i.e. per team), but we have also doubled the number of observations compared to the match-level approach. The machine-learning model predicts the probability of a team winning, and these probabilities are combined to predict the outcome of a game.
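The combination rule described above is simply a comparison of the two per-team probabilities; a minimal sketch:

```python
def predict_match(p_home_win, p_away_win):
    """Combine the two team-level win probabilities: the side with the higher
    probability is predicted to win the match (1 = home win, 2 = away win)."""
    return 1 if p_home_win >= p_away_win else 2

print(predict_match(0.71, 0.48))  # -> 1
```

Note that the two probabilities come from independent per-team predictions, so they need not sum to one; only their relative order matters here.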
As mentioned before, these features can be expanded to include shooting percentages, steals, etc. We leave this for future work.
For the remainder of the analysis, we split the data into two subsets: the 2017 and 2018 seasons for hyperparameter calibration, feature selection and training, and the final season, 2019, for testing. Also, all features are normalised to the interval [0, 1], so that biases due to magnitude are removed.
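One standard way to carry out this normalisation (an assumption, the paper does not name the exact tool) is scikit-learn's MinMaxScaler, fitted on the training seasons only and then applied unchanged to the test season:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical two-feature design matrices: seasons 2017-2018 for training,
# season 2019 for testing.
X_train = np.array([[10.0, 80.0], [2.0, 95.0], [6.0, 88.0]])
X_test = np.array([[4.0, 90.0]])

scaler = MinMaxScaler()                    # maps each feature onto [0, 1]
X_train_s = scaler.fit_transform(X_train)  # fit on the training seasons only
X_test_s = scaler.transform(X_test)        # reuse the training-set ranges
print(X_test_s)
```

Fitting the scaler on the training data alone avoids leaking information from the hold-out season into the model.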
4.1.3 Classifiers
The binary classification problem is a well-studied problem in the machine-learning literature and numerous algorithms have been proposed and developed [6]. Here, we experiment with the following list of off-the-shelf algorithms that are available in the scikit-learn Python library (https://scikit-learn.org/stable/) [12]. The reason we use these algorithms is twofold: (i) we do not want to reinvent the wheel, so we use well-established algorithms; (ii) the scikit-learn library has been developed and maintained to high standards by the machine-learning community, constituting an established tool in the field.
The classifiers are:

Logistic Regression (LR)

Support Vector Machines (SVM) with linear and RBF kernels

Decision Tree (DT)

Random Forest (RF)

Naive Bayes (NB)

Gradient Boosting (GB)

Nearest Neighbours (NN)

Discriminant Analysis (DA)

AdaBoost (Ada)
4.2 Calibration
The above classifiers have hyperparameters, which we tune using 5-fold cross-validation and grid search on the training set. Although some algorithms have more than one hyperparameter, for the majority of the classifiers we limit ourselves to tuning only a few parameters in order to speed up the computations. These hyperparameters are:

The regularisation strength for LR and SVM-linear

The number of estimators for RF, GB and Ada

The number of neighbours for NN

The regularisation strength and the width of the RBF kernel for SVM-rbf

The number of estimators and the learning rate for Ada, abbreviated as Ada2 (we experiment with two versions of the Ada classifier)

All other hyperparameters are set to their default values.
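The grid search over the "Ada2" hyperparameters can be sketched with scikit-learn's GridSearchCV; the data and grid values below are illustrative stand-ins, not the ones used in the study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the 12-feature match-level training data.
X, y = make_classification(n_samples=200, n_features=12, random_state=0)

# 5-fold cross-validated grid search over the "Ada2" hyperparameters.
grid = GridSearchCV(
    AdaBoostClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "learning_rate": [0.7, 1.0]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Replacing the estimator and `param_grid` reproduces the searches for the other classifiers in the list above.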
We record both the accuracy and the weighted accuracy across the folds. For each value in the hyperparameter search, we find the mean score across the folds and report its maximum value. In Figure 6, we plot the scores of the 5-fold cross-validation process for the aforementioned classifiers for the match-level classification. We observe that the Ada and Ada2 classifiers outperform all others on all scores, with gradient boosting coming third.
In Figure 7, we plot the scores of the 5-fold cross-validation for the classifiers of the team-level analysis. We observe that the team-level analysis underperforms the match-level models for almost all classifiers.
We conclude that the Ada and Ada2 classifiers at the match level are the best-performing models, with accuracy scores of 0.705 and 0.708 respectively, and we select them for further analysis.
4.3 Feature Selection
Feature selection is an important step in the machine-learning methodology. There are several methods for selecting the most informative features: filter, embedded, wrapper and feature-transformation (e.g. Principal Component Analysis (PCA)) methods; see [5, 15] for further details. Here, we explore several of these methods and attempt to identify the most informative features.

4.3.1 Filter Methods
Filter methods are agnostic to the algorithm; using statistical tests, they aim to determine the features with the highest statistical dependency between the target and feature variables [5, 15]. For classification, such methods include: (i) the ANOVA F-test, (ii) mutual information (MI) and (iii) the chi-square (Chi2) test.
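All three tests are available in scikit-learn and can be used to rank features on synthetic stand-in data as follows (note that the chi-square test requires non-negative inputs, which the [0, 1] normalisation guarantees):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the 12 match-level features.
X, y = make_classification(n_samples=200, n_features=12, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs

rankings = {}
for score_fn in (f_classif, mutual_info_classif, chi2):
    scores = SelectKBest(score_fn, k="all").fit(X, y).scores_
    rankings[score_fn.__name__] = np.argsort(scores)[::-1]  # most informative first

print({name: r[:3].tolist() for name, r in rankings.items()})
```

Comparing the three rankings side by side is what produces the heat-map style comparison of Figure 8.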
We use the match-level features and apply the three filter methods. We rank the features from most informative to least informative for each of the three methods and plot the results in Figure 8. The darker blue a feature is, the more informative it is for the corresponding method. At the other extreme, the least informative features are yellowish. We observe some common patterns: the position and F4 features are very informative, whereas the Offence of the Away team and the Defence features are the least informative for all methods.
Since some features are less informative, we assess the performance of the model with an increasing number of features, starting from the most informative according to each filter method and adding the less informative ones incrementally. If some features had a negative impact on the performance of the models, we would expect the accuracy to peak at some number of features and then decrease. However, we observe (see Figure 9) that the maximum performance scores (accuracy and weighted accuracy) are achieved when all 12 features are included in the model. Hence, although some features are less informative, they all contribute positively to the model's performance.
As no subset of features can exceed the performance of the all-features model, we discard filter methods as a potential method for feature reduction/selection.
4.3.2 PCA
Principal Component Analysis (PCA) is a dimensionality-reduction method for the feature space; see [6] for further details. We run PCA for an increasing number of components, from 1 to the maximum number of features, and assess each number of components using the Ada model. In addition, at each value, a grid search finds the optimal hyperparameter (the number of estimators) and the optimal model is assessed. (The reason we re-run the grid search is that the hyperparameters found in the previous section are optimal for the original feature set; PCA transforms the features, and hence a different hyperparameter value might be optimal.)
The best accuracy and weighted-accuracy scores for different numbers of components are plotted in Figure 10. First, we observe that the model's performance peaks at three components, which capture most of the variance. As more components are added, noise is added and the performance decreases. Second, even at peak performance, the accuracy and weighted accuracy do not exceed those of the Ada model with the original features. Hence, we discard PCA as a potential method for feature reduction/selection.
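The combined search over the number of components and the classifier's hyperparameter can be expressed as a scikit-learn pipeline inside a single grid search; a sketch on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the 12 match-level features.
X, y = make_classification(n_samples=200, n_features=12, random_state=0)

# Joint search over the number of components and the number of estimators,
# so the classifier is re-tuned for every PCA setting.
pipe = Pipeline([("pca", PCA()), ("ada", AdaBoostClassifier(random_state=0))])
grid = GridSearchCV(
    pipe,
    param_grid={"pca__n_components": [1, 3, 6, 12],
                "ada__n_estimators": [50, 100]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```

The pipeline also guarantees that PCA is fitted inside each cross-validation fold, avoiding leakage from the validation folds.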
4.3.3 Wrapper Methods
Wrapper methods scan combinations of features, assess their performance using a classification algorithm and select the feature (sub)set that maximises the performance score (in a cross-validation fashion). Hence, they depend on the classification method.
Since the number of features in our study is relatively small (12), we are able to scan all possible subsets of features (2^12 − 1 = 4095 in total). As we cannot find the optimal hyperparameters for each subset of features (the search space becomes huge), we proceed in a two-step iterative process. First, we use the best pair of hyperparameters of the "Ada2" algorithm (we remind the reader that we focus on the "Ada2" algorithm, as the previous section showed it to be the best-performing algorithm for this task) to scan all the combinations of features. Then, we select the combination of features which outperforms the others. In the second step, we retrain the algorithm to find a new pair of hyperparameters that might be better for the selected feature subset.
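The first step of this process, an exhaustive scan of feature subsets with fixed hyperparameters, can be sketched as follows. The toy problem uses 4 features so that the scan (2^4 − 1 = 15 subsets) runs quickly; the study itself scans all subsets of its 12 features.

```python
from itertools import combinations

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in with 4 features; the hyperparameters are fixed for the scan.
X, y = make_classification(n_samples=100, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
clf = AdaBoostClassifier(n_estimators=50, random_state=0)

best_subset, best_score = None, -1.0
n_feat = X.shape[1]
for k in range(1, n_feat + 1):
    for subset in combinations(range(n_feat), k):
        score = cross_val_score(clf, X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_subset, best_score = subset, score

print(best_subset, round(best_score, 3))
```

In the second step of the process, a fresh hyperparameter grid search would be run on `X[:, best_subset]` only.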
In the first step, we sort the different combinations of features by their performance and plot the accuracy and weighted accuracy of the top ten combinations in Figure 11. We observe that two subsets of features rank in the top two positions in both accuracy and weighted accuracy. We select these two feature subsets for further evaluation, hyperparameter tuning, final training and testing. These feature sets are:

Model 1: Position Home, Position Away, Offence Home, Offence Away, Defence Away, Difference Away, F4 Away.

Model 2: Position Home, Offence Home, Offence Away, Defence Away, Difference Away, F4 Home, F4 Away.
Having selected these two sets of features, we proceed to step two of the iterative process and recalibrate their hyperparameters. At the end of the two-step iterative process, we identify the following subsets of features with their optimal hyperparameters:

Model 1: Ada2 with 141 estimators and a 0.7 learning rate, achieving 0.7498 accuracy and 0.7222 weighted accuracy.

Model 2: Ada2 with 115 estimators and a 0.7 learning rate, achieving 0.7375 accuracy and 0.7066 weighted accuracy.
We conclude that the wrapper method results in the best-performing features, and hence models, achieving an accuracy of 75% via 5-fold cross-validation. We select these models to proceed to the validation phase.
4.4 Validation
In this section, we use the models (i.e. features and hyperparameters) selected in the previous section for validation on the season 2018–2019. For comparison, we also include the Ada2 model with all features (i.e. without feature selection) as "Model 3". At each game round, we train the models on all the available data before that round. For example, to predict the results of round 8, we use all data available from seasons 2017 and 2018 and rounds 2 to 7 of season 2019. We ignore round 1 in all seasons, as no information (i.e. features) is yet available for that round. (We comment on the "cold-start" problem in the Discussion section.) Then we predict the results of the 8 games of that round, store the results and proceed to the next round.
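This round-by-round (walk-forward) validation scheme can be sketched as follows; the data here is a random stand-in, so the resulting accuracy is only illustrative of the mechanics, not of any model's quality.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a season: rounds of 8 games with 12 features each and
# random binary outcomes.
n_rounds, games_per_round, n_features = 10, 8, 12
X = rng.normal(size=(n_rounds * games_per_round, n_features))
y = rng.integers(0, 2, size=n_rounds * games_per_round)
round_of_game = np.repeat(np.arange(n_rounds), games_per_round)

correct = 0
for rnd in range(2, n_rounds):  # skip the earliest rounds with no features
    train = round_of_game < rnd          # everything played before this round
    test = round_of_game == rnd          # the round to predict
    model = AdaBoostClassifier(n_estimators=50, random_state=0)
    model.fit(X[train], y[train])
    correct += int((model.predict(X[test]) == y[test]).sum())

accuracy = correct / ((n_rounds - 2) * games_per_round)
print(round(accuracy, 3))
```

Retraining before every round mirrors how the model would actually be used during a live season.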
We benchmark the models against three simple models. These are:

Benchmark 1: The home team always wins. This is the majority model in machine-learning terms.

Benchmark 2: A Final-4 team of the previous season always wins; if neither team is a Final-4 team, or both are, the home team wins.

Benchmark 3: The team higher in the standings at that point in the season wins.
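The three benchmarks are simple enough to state directly in code; the team names below are hypothetical examples:

```python
def benchmark1(home, away, final4, standings):
    """Benchmark 1: the home team always wins (majority model)."""
    return 1

def benchmark2(home, away, final4, standings):
    """Benchmark 2: a previous-season Final-4 team wins; otherwise, and for
    Final-4 vs Final-4 games, fall back to the home team."""
    if away in final4 and home not in final4:
        return 2
    return 1

def benchmark3(home, away, final4, standings):
    """Benchmark 3: the team currently higher in the table wins
    (a lower number means a higher position)."""
    return 1 if standings[home] <= standings[away] else 2

# Hypothetical example: CSKA was a Final-4 team and leads the table.
final4 = {"CSKA"}
standings = {"Real": 3, "CSKA": 1}
print(benchmark1("Real", "CSKA", final4, standings),
      benchmark2("Real", "CSKA", final4, standings),
      benchmark3("Real", "CSKA", final4, standings))  # -> 1 2 2
```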
Accuracy  Weighted Accuracy  

Model 1  0.6595  0.6096 
Model 2  0.6595  0.6211 
Model 3  0.6681  0.6306 
Benchmark 1  0.6509  0.5000 
Benchmark 2  0.7198  0.6446 
Benchmark 3  0.6810  0.7035 
From Table 2 we observe the following. First, the machine-learning models (marginally) outperform the majority benchmark. However, they fail to outperform the more sophisticated benchmarks 2 and 3. We also observe that models 1 and 2, resulting from feature selection, perform worse than model 3 (where all features are present). This implies that models 1 and 2 have been overfitted and that no feature selection was required at this stage, as the number of features is still relatively small. This is also in agreement with the filter-methods feature selection. The performance of benchmark 2 is a special case: in the season 2018–2019, the teams that qualified for the Final-4 of the previous season (2017–2018) continued their good basketball performance. Historically, however, this is not always the case, and this benchmark is not expected to be robust across seasons.
We focus on "Model 2" and plot the number of correctly predicted games in every round of the season 2018–2019 in Figure 12, and the model's accuracy per round in Figure 13. From these figures, we observe that the model achieves perfect accuracy in rounds 20 and 23, and 7/8 in four other rounds. With the exception of 4 rounds, the model always predicts at least half of the games correctly. We manually inspected round 11, where the model has its lowest accuracy, and found 4 games that a basketball expert would consider "unexpected", a factor that might contribute to the poor performance of the model in that round.
In Figure 13 we also plot the trend of the accuracy, which is positive, indicating that as the season progresses, the teams become more stable and the model more accurate.
5 The Wisdom of (Basketball) Crowds
The author is a member of a basketball forum on a social networking platform, on which a basketball prediction championship is organised for fun. In this championship, the members give their predictions for the Euroleague games of the upcoming round. They get 1 point if the prediction is correct and 0 otherwise. At the end of the season, the players in the top eight of the table qualify for the play-off phase.
Here, we collect the players' predictions. For each game in each round, we assign a result: the majority prediction of the players. (Round 26 is omitted; due to a technical issue on the forum, results were not collected and the round was declared invalid.) The championship started with about 60 players and finished with 41 players still active, as many dropped out during the course of the season.
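The majority-vote aggregation is straightforward; a minimal sketch on hypothetical per-game predictions:

```python
from collections import Counter

# Hypothetical predictions of forum members for one game (1 = home, 2 = away).
member_predictions = [1, 1, 2, 1, 2, 2, 1]

def majority_vote(preds):
    """The crowd's prediction is the most common individual prediction."""
    return Counter(preds).most_common(1)[0][0]

print(majority_vote(member_predictions))  # -> 1 (four of seven picked home)
```

Repeating this per game and comparing against the actual results yields the crowd's accuracy reported in Table 3.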
We update Table 3 to include the majority vote of the forum members (round 26 is excluded from the accuracy calculation due to the technical issue; the first round is excluded from all performance scores).
We observe that the collective predictions of the members of the forum have a much better accuracy (and weighted accuracy) than any model in this study (machinelearning or benchmark). This is a known phenomenon in collective behaviour, termed the “wisdom of the crowds” [13], observed in social networks and rumour detection [3] among others. For this study, we adapt the term and refer to it as “the wisdom of the basketball crowds”.
Table 3: Accuracy and weighted accuracy on the hold-out season 2018–2019 for the models, the benchmarks, and the forum members’ majority vote.

                                  Accuracy   Weighted Accuracy
  Model 1                          0.6607        0.6113
  Model 2                          0.6563        0.6199
  Model 3                          0.6696        0.6331
  Benchmark 1                      0.6518        0.5000
  Benchmark 2                      0.7188        0.6449
  Benchmark 3                      0.6786        0.7027
  Majority Vote of the Members     0.7321        0.6811
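Benchmark 1's weighted accuracy of exactly 0.5000 is consistent with "weighted accuracy" meaning balanced accuracy, i.e. the mean of per-class recalls; this interpretation is an assumption here, as the metric is defined earlier in the paper. Under that reading, a predictor that always picks the same class (e.g. the home team) scores exactly 0.5:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls over the classes present in y_true.
    (Assumed interpretation of the paper's "weighted accuracy".)"""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

# Always predicting "home win" (1): recall 1.0 on class 1, 0.0 on class 0.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1] * len(y_true)
print(balanced_accuracy(y_true, y_pred))  # 0.5
```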
6 Summary and Conclusions
In this article we focused on Euroleague basketball games in the modern-format era. We first explored the descriptive characteristics of the last three seasons and highlighted the importance of home advantage. We also found that the game characteristics remain stable across the three seasons, allowing us to use data from all of them for modelling purposes.
We then introduced a framework for machine-learning modelling of the games’ outcomes. In machine-learning terms, this is a binary classification task. We compared the nine most commonly used classification algorithms, and AdaBoost outperformed the others on this task. Two classification methodologies were also compared, match-level and team-level classification, with the former being more accurate. Feature selection and hyperparameter tuning are essential steps in the machine-learning pipeline, yet previous studies had ignored either or both of them. Wrapper methods proved to outperform filter methods and PCA. Moreover, proper validation of the models on a hold-out set, and their benchmarking, had been omitted in some of the previous studies (most of them validated their models using k-fold cross-validation only).
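The pipeline steps described above (hyperparameter tuning via cross-validation, followed by evaluation on a hold-out set) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the study's actual features or parameter grid; the grid values below are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a match-level feature matrix (binary outcome).
X, y = make_classification(n_samples=600, n_features=12, random_state=0)

# Hold out a "season" for final validation, kept apart from CV tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Hyperparameter tuning with 5-fold cross-validation (grid is illustrative).
grid = GridSearchCV(
    AdaBoostClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "learning_rate": [0.5, 1.0]},
    cv=5, scoring="accuracy")
grid.fit(X_train, y_train)

cv_acc = grid.best_score_                 # CV estimate (can be optimistic)
holdout_acc = grid.score(X_test, y_test)  # hold-out check
print(f"CV: {cv_acc:.3f}, hold-out: {holdout_acc:.3f}")
```

Comparing `cv_acc` with `holdout_acc` is exactly the check that revealed the gap between 75% (cross-validation) and 66.8% (hold-out season) in this study.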
Our model achieves 75% accuracy using 5-fold cross-validation. However, this is not the complete picture. The model was properly validated on a hold-out set, the season 2018–2019, achieving 66.8% accuracy. This indicates that either the models were overfitted during cross-validation or the season 2018–2019 is exceptionally difficult to predict. The success of the simple benchmarks, however, suggests that overfitting is the more likely explanation. Overfitting during the training phase (including feature selection and hyperparameter tuning) is always a risk in machine learning. One lesson learned is that k-fold cross-validation is not always an informative and conclusive way to validate a model.
We also compared our results to the collective predictions of a group of basketball enthusiasts. The collective predictive power of that group outperforms any model (machine-learning or benchmark) in this study. We are intrigued by this result. We believe that for a machine-learning model to be considered successful, it should outperform human predictive capabilities (either individual or collective). For this reason, we set 73% as the baseline accuracy for predicting Euroleague basketball games. This is also in agreement with [16], in which the authors found that all machine-learning models had a predictive accuracy bounded by 74% (although that study concerned college basketball games). Future models for basketball predictions should aim to break this threshold.
6.1 What is next?
There are many potential directions and improvements for this study. First, we aim to repeat this analysis with more team features, such as field-goal percentages, field goals made, three-pointers, free throws, blocks, and steals (normalised either per game or per possession). Additional features from advanced statistics could be included, such as effective field goals, efficiencies, etc. The feature space then becomes larger, and feature-reduction techniques might prove meaningful.
Another direction for improvement is the inclusion of players’ information and features. We discussed that the study in [11] achieves 80% accuracy in NBA game predictions by including players’ features in their classifier.
Finally, as the seasons progress, higher volumes of data will become available. Although more data does not necessarily lead to more accurate models, we will be able to conduct more analysis and potentially avoid overfitting. In many studies we reviewed in the Introduction, the accuracy of the models varies greatly from one season to the next; more data will provide us with more robust results. Additionally, we will be able to exploit methods such as neural networks, which proved successful in [11].
This study is neither a final solution nor the end of the story. Rather, it lays out the methodology and best practices for further exploring the question of predicting game outcomes. We will continue to retrain and re-evaluate our models with the 73% accuracy threshold (the accuracy of the wisdom of the basketball crowds) in mind.
References
 [1] (2009) NBA oracle. Note: https://www.mbeckler.org/coursework/20082009/10701_report.pdf Cited by: §1.
 [2] (2015) NBA game prediction based on historical data and injuries. Note: http://dionny.github.io/NBAPredictions/website/ Cited by: §1.
 [3] (2016) Determining the veracity of rumours on twitter. In Social Informatics: 8th International Conference, SocInfo 2016, Bellevue, WA, USA, November 11–14, 2016, Proceedings, Part I, E. Spiro and Y. Ahn (Eds.), pp. 185–205. External Links: Document, Link Cited by: §5.
 [4] (2019) SprawlBall: a visual tour of the new era of the nba. Houghton Mifflin Harcourt, Boston, New York. Cited by: §1.
 [5] (2003) An introduction to variable and feature selection. J. Mach. Learn. Res. 3, pp. 1157–1182. External Links: ISSN 1532-4435, Link Cited by: §4.3.1, §4.3.
 [6] (2009) The elements of statistical learning. 2nd edition, Springer, New York, USA. Cited by: §4.1.3, §4.3.2.
 [7] (2017, Sep.) Machine learning approaches to predict basketball game outcome. In 2017 3rd International Conference on Advances in Computing, Communication & Automation (ICACCA) (Fall), pp. 1–7. External Links: Document Cited by: §1.
 [8] (2007) A starting point for analyzing basketball statistics. Journal of Quantitative Analysis in Sports 3. External Links: Document, ISSN 15590410 Cited by: §1.
 [9] (2014) An analysis of new performance metrics in the nba and their effects on win production and salary. Master’s Thesis, University of Mississippi, USA. Cited by: §1.
 [10] (2004) Basketball on paper: rules and tools for performance analysis. Potomac Books, Inc., Quicksilver Drive, Dulles, VA, USA. Cited by: §1, §1.
 [11] (2019) Client case study: applying machine learning to NBA predictions. Note: https://blog.oursky.com/2019/11/26/machinelearningapplicationsnbapredictions/ Cited by: §1, §6.1, §6.1.
 [12] (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830. Cited by: §4.1.3.
 [13] (2005) The wisdom of crowds. Anchor. External Links: ISBN 0385503865 Cited by: §5.
 [14] (2013) Prediction of nba games based on machine learning methods. Master’s Thesis, University of Wisconsin–Madison, USA. Cited by: §1.

 [15] (2005) The curse of dimensionality in data mining and time series prediction. In Computational Intelligence and Bioinspired Systems, J. Cabestany, A. Prieto, and F. Sandoval (Eds.), Berlin, Heidelberg, pp. 758–770. External Links: ISBN 9783540321064, Document Cited by: §4.3.1, §4.3.
 [16] (2013) Predicting college basketball match outcomes using machine learning techniques: some results and lessons learned. arXiv preprint arXiv:1310.3607. External Links: Link Cited by: §1, §6.