Ensemble methods are learning algorithms that construct a set of classifiers and combine them to classify new, unseen data . Random forests are a type of ensemble method based on the combination of several independent decision trees . In recent years, the random forests framework and its variants have been successfully applied in practice as a general classification and regression tool. In particular, random forests have been widely used in computer vision , ,  and pattern recognition applications , , , , which has advanced the state of the art in performance. Despite these successful applications, the theoretical analysis of random forest models is still very difficult; even their basic mathematical properties are hard to understand. In  and , Biau and colleagues try to narrow the gap between the theory and practice of random forests. However, the models proposed in these two papers do not deliver effective results, and their running time is not efficient.
In this paper, we introduce a novel random forests algorithm based on cooperative game theory. We adopt the Banzhaf power index to evaluate the power of each feature by traversing all possible coalitions; accordingly, we call the proposed algorithm Banzhaf random forests (BRF). Different from the commonly used information gain rate of information theory, which simply chooses the most informative feature, the Banzhaf power index measures the importance of each feature through its dependency on a group of features (a coalition). More importantly, we prove the consistency of the proposed forests, which contributes to narrowing the gap between theory and practice for random forest classification.
The rest of this paper is organized as follows. In Section 2, we provide a brief overview of existing random forests models and analyze their advantages and disadvantages. In Section 3, we introduce the general random forests framework, including the construction of trees and randomness injection. Section 4 describes the proposed algorithm, Banzhaf random forests (BRF), in detail, while Section 5 is devoted to the justification of the consistency of BRF. Section 6 shows the experimental results on some UCI benchmark data sets, and Section 7 concludes this paper.
2 Related work
The random forests framework builds on the random subspace method of , the feature selection work of , and the random split selection of . Based on the seminal work of Breiman ,  suggests that it is best to average across sets of trees with different structures rather than rely on any of the constituent trees. Criminisi et al.  present a unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision and medical image analysis tasks. With the development of random forests in recent years, they have been applied to a wide variety of real-world problems , , .
Despite the successful applications of random forests in practice, the mathematical properties behind them are not yet well understood. For example, the early theoretical work of , which is essentially based on mathematical heuristics, has not been formalized into rigorous theory.
In theory, there are two main properties of interest related to random forests. One is the consistency of the models, i.e., whether they converge to an optimal solution as the data set grows infinitely large. The other is the rate of convergence. Our paper mainly focuses on consistency, which  has proved Breiman's random forests cannot guarantee.
To design consistent random forests, many researchers have worked in this direction. Meinshausen  has shown that a random forests algorithm for quantile regression is consistent; Ishwaran and Kogalur  have shown the consistency of their survival forests model; Denil et al.  show the consistency of an online version of random forests, while  presents a new random regression forests model. These consistent models can be applied to regression, survival, or online settings, but not to batch classification settings where all the training data can be used together for learning. In this paper, we propose a novel random forests model based on cooperative game theory for multi-class classification problems. The consistency of the proposed algorithm is also proved.
The two papers most closely related to our work are  and .  proves the consistency of some popular averaging classifiers, including random forests. Specifically, the authors treat  as a weighted layered nearest neighbor classifier from the perspective of the taxonomy proposed by . Unfortunately, this property prevents the consistency of random tree classifiers. To remedy the inconsistency of tree classifiers, the authors suggest the technique introduced in . Moreover,  has also proposed a scale-invariant version of random forests with consistency. Recently,  presented a new model of random forests, which is similar to the original algorithm of . The main difference between these two models is in how random features are selected.  requires a second independent data set to evaluate the importance index of each feature and uses this property to prove the consistency of their algorithm, while the model of  does not need the second data set. In this paper, we use the Banzhaf power index to evaluate the power of each feature by traversing all possible feature coalitions, without employing a second data set. The consistency of the proposed algorithm is theoretically guaranteed.
3 Random Forests
In this section we briefly review the random forests framework. Typically, random forests are built by combining the predictions of several trees, each of which is trained in isolation. Unlike in boosting , where the base models are trained and combined using a dynamic weighting scheme, the trees are trained independently and the predictions of the trees are combined through averaging or majority voting. For a more comprehensive review, please refer to  and .
To construct a random tree, three core steps are required: the first is the method for splitting the tree nodes; the second is the type of predictor to use in each leaf, and the third is the method of injecting randomness into the trees.
In a typical method for splitting nodes, the split depends on whether or not the samples exceed a threshold value in a chosen feature. Alternatively, for linear splits, a linear combination of features is compared with a threshold to make the decision. The threshold value in either case can be chosen randomly or by optimizing a function of the data; for example, the Gini index and the information gain rate are commonly used. In this paper, we choose the midpoint of a feature's values as the splitting threshold, which makes the proposed algorithm very efficient, especially in large-scale applications.
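As an illustration of the midpoint rule (a minimal sketch; the function name and list-based data layout are ours, not from the paper):

```python
def midpoint_split(X, feature):
    """Split row indices at the midpoint of a feature's observed range."""
    values = [row[feature] for row in X]
    threshold = (min(values) + max(values)) / 2.0
    left = [i for i, v in enumerate(values) if v <= threshold]
    right = [i for i, v in enumerate(values) if v > threshold]
    return threshold, left, right
```

Because the threshold is simply the midpoint of the observed range, no purity function has to be optimized at the node, which is where the claimed efficiency comes from.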
In order to split a node of each tree, candidate features of data are generated and a criterion is evaluated to choose between them. A simple strategy, as in the models analyzed in , is to choose among the features uniformly at random. A more common approach is to choose the candidate split which optimizes a purity function over the nodes that would be created. Particularly, two typical choices are to maximize the information gain  and minimize the Gini index. In our Banzhaf random forests, we use the Banzhaf power index of the cooperative game theory , which measures the distribution of power among the features on the data sets.
For the choice of predictors,  propose several different leaf predictors for regression and other tasks. One common choice is to average the predictions over the training points which fall in that leaf. Another is to take a majority vote over the points in that leaf. In our work, we take the latter strategy.
It is important to inject randomness into the trees for random forests. This can be achieved in several ways. One choice is on the features to be split at each node; the other one is the coefficients for random combinations of features. One common method is to build each tree using a bootstrapped or sub-sampled data set. In this way, each tree in the forest is trained on slightly different data, which introduces differences between the trees. Similar to , our work uses a bootstrapped method to inject randomness into each tree.
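A bootstrapped training set can be drawn as follows (a sketch; the function name and the explicit `rng` argument are our assumptions):

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) examples uniformly with replacement (the bootstrap)."""
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(n)]
```

Each tree then sees a slightly different sample, which is the randomness injection described above.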
4 Banzhaf Random Forests
In this section, we describe the proposed algorithm, Banzhaf random forests (BRF), in detail. Firstly, we introduce some basic concepts of cooperative game theory. Secondly, based on the Banzhaf power index, we introduce the way to construct the randomized trees. Thirdly, we combine the Banzhaf trees to formulate the Banzhaf random forests. Finally, we present the prediction method of the Banzhaf random forests.
4.1 Basic concepts of cooperative game theory
Cooperative game theory mainly studies 'acceptable' ways of distributing the gains collectively achieved by a group of cooperating agents . A cooperative profit game $(N, v)$ consists of a player set $N = \{1, \dots, n\}$ and a characteristic function $v : 2^N \to \mathbb{R}$. For each subset $S \subseteq N$ (a coalition), $v(S)$ can be interpreted as the profit achieved by the players in $S$. The usual goal in a cooperative game is to distribute the total gain of the global coalition among the players in fair and reasonable ways. Different requirements on fairness and rationality lead to different solution concepts of the cooperative game, such as the core, the Banzhaf power index, and related approximate-core concepts. Among the various solution concepts, the Banzhaf power index is the one motivated by fairness.
For a game $(N, v)$, if it is monotone, i.e., it satisfies $v(S) \le v(T)$ for every pair of coalitions such that $S \subseteq T$, and its characteristic function only takes the values 0 and 1, i.e., $v(S) \in \{0, 1\}$ for all $S \subseteq N$, the game is called a simple game. In a simple game $(N, v)$, the coalitions with value 1 are called 'winning' and those with value 0 are called 'losing', i.e., $v(S) = 1$ and $v(S) = 0$, respectively. Each coalition $S$ such that $S \cup \{i\}$ wins while $S$ loses is called a swing for player $i$, because the membership of player $i$ in the coalition is crucial to the 'winning'. In fact, the Banzhaf power index counts the number of losing coalitions that become winning when a player joins them, so as to find the most crucial player, the one who can make the majority of coalitions win.
The Banzhaf power index, which yields a unique outcome in coalitional games, was proposed to measure the marginal contribution of players in a game . In simple games, the Banzhaf power index has a particularly attractive interpretation: it measures the power of a player, i.e., the probability that the player can influence the outcome of the game. In this paper, we use the Banzhaf power index to measure the power of each feature.
4.2 Construction of Banzhaf tree
Figure 1 shows the structure of a Banzhaf decision tree. For the root node, the feature is selected with the information gain rate. For all the other nodes, the features are selected with the Banzhaf power index. The idea of the Banzhaf decision tree is mainly motivated by game theory, especially cooperative game theory. We take the features of the data as the players in a game; then the original tree construction problem is transformed into a cooperative 'feature' game. At each node, features are selected in the form of coalitions and the best one is chosen for splitting.
Next, we present the way the Banzhaf power index is computed in this work.
The original definition of the Banzhaf power index is described in . Given a cooperative game $(N, v)$ with $|N| = n$, the Banzhaf power index of a player $i$ is the probability of a swing for player $i$. We denote the Banzhaf power index as $\beta_i(v)$, and it is given by
$$\beta_i(v) = \frac{1}{2^{n-1}} \sum_{S \subseteq N \setminus \{i\}} m_i(S), \qquad (1)$$
where $m_i(S)$ is the marginal contribution of player $i$, i.e., $m_i(S) = v(S \cup \{i\}) - v(S)$.
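The definition above can be checked by brute-force enumeration of the $2^{n-1}$ coalitions that exclude player $i$ (a sketch; the characteristic function `v` is supplied by the caller, and the function name is ours):

```python
from itertools import combinations

def banzhaf_index(v, players, i):
    """Average marginal contribution v(S | {i}) - v(S) over all S excluding i."""
    others = [p for p in players if p != i]
    total, count = 0.0, 0
    for r in range(len(others) + 1):
        for coalition in combinations(others, r):
            S = frozenset(coalition)
            total += v(S | {i}) - v(S)
            count += 1
    return total / count  # count == 2 ** (len(players) - 1)
```

For the three-player simple majority game (v(S) = 1 iff |S| >= 2), the swings for player 1 are exactly {2} and {3}, giving an index of 1/2.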
The Banzhaf power index measures the distribution of power among the players in cooperative games. Here, we apply it to decision tree construction, attempting to estimate the power of each feature at each tree node. The power of each feature can be measured by averaging the contributions that it makes to each of the coalitions to which it belongs. Let coalition $S$ be a candidate feature subset and $f_i$ the feature to be estimated. Define the ratio $\tau(S, f_i) = d_i / |S|$ to represent the impact of feature $f_i$ on coalition $S$, where $d_i$ can be interpreted as the number of features in $S$ that fall into an interdependence relationship with feature $f_i$, and $|S|$ is the number of features in the coalition $S$. We then define a threshold value $\tau_0$ (commonly $\tau_0 = 1/2$). If $\tau(S, f_i) < \tau_0$, we call the coalition 'losing', otherwise 'winning', i.e.,
$$v(S \cup \{f_i\}) = \begin{cases} 1 & \text{if } \tau(S, f_i) \ge \tau_0, \\ 0 & \text{otherwise.} \end{cases}$$
Here, $v(S \cup \{f_i\}) = 1$ means that feature $f_i$ is the key to making the coalition exhibit better performance. The threshold value 1/2 means that if more than half of the features are interdependent with $f_i$, it will join the coalition and make it 'winning'. Hence, for simplicity of computation, we define $m_i(S)$ in Eq. (1) as
$$m_i(S) = v(S \cup \{f_i\}).$$
For clarity, we give an example to show how to compute the Banzhaf power index. Given a cooperative 'feature' game $(N, v)$ with feature player set $N = \{f_1, f_2, f_3\}$, suppose the current goal is to calculate the Banzhaf power index of $f_1$. The total number of possible coalitions of feature subsets is 7 (excluding $\emptyset$). Assume there are 3 winning coalitions with respect to $f_1$, i.e., roughly half of the coalitions are interdependent with feature $f_1$. Then the Banzhaf power index of $f_1$ can be computed as
$$\beta_{f_1}(v) = \frac{1}{2^{3-1}} \times 3 = \frac{3}{4}.$$
Similarly, the Banzhaf power index of the other features can be computed in the same way. Generally, the Banzhaf power index is rarely zero in large-scale and high-dimensional applications.
To evaluate the impact of feature $f_i$, we need to calculate the proportion of 'winning' coalitions. In general this would lead to a high computational complexity, but our model randomly selects only a small group of features to compute the Banzhaf power index at each node. Hence, the computational complexity is fairly low.
To calculate the proportion of the 'winning' coalitions, we use the conditional mutual information of information theory to evaluate the interdependence between a single feature player $f_j$ and the feature player $f_i$. If more than half of the feature players are interdependent with $f_i$, then we have $v(S \cup \{f_i\}) = 1$.
In our paper, the conditional mutual information is defined as the amount of interdependence between feature player $f_i$ and feature player $f_j$ given the feature coalition $S$. It is formally defined by
$$I(f_i; f_j \mid S) = \sum p(f_i, f_j, S) \log \frac{p(f_i, f_j \mid S)}{p(f_i \mid S)\, p(f_j \mid S)}.$$
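For discrete features, the conditional mutual information can be estimated from empirical counts (a sketch; here the conditioning coalition is summarized by a single discrete variable `zs`, and the function name is ours):

```python
from collections import Counter
from math import log2

def conditional_mutual_info(xs, ys, zs):
    """Plug-in estimate of I(X; Y | Z) from aligned discrete samples."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    mi = 0.0
    for (x, y, z), c in pxyz.items():
        # p(x,y,z) * log[ p(x,y|z) / (p(x|z) p(y|z)) ]
        mi += (c / n) * log2((c / n) * (pz[z] / n)
                             / ((pxz[(x, z)] / n) * (pyz[(y, z)] / n)))
    return mi
```

A thresholded version of this quantity decides whether a feature pair counts as 'interdependent' when forming winning coalitions.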
4.3 Banzhaf random forests algorithm
Given a training data set $D$ with $N$ samples of dimensionality $M$, the procedure of the Banzhaf random forests (BRF) algorithm can be described as follows.
For the construction of each Banzhaf decision tree in BRF, randomly draw $N$ samples with replacement using the bootstrap and randomly select a subset of features without replacement from the training data. Based on this data set $D_i$, grow a recursive Banzhaf tree.
For the root node, the feature is selected with the information gain rate. For all the other nodes, the features are selected with the Banzhaf power index. The feature associated with the corresponding node is split at the midpoint of its values, generating the left and right branches.
If a (terminal) node has a percentage of incorrectly assigned samples less than $\epsilon$, then stop building the Banzhaf tree, where $\epsilon$ is a pre-specified number.
BRF predicts the labels of test data based on the votes it received from each Banzhaf tree.
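The steps above can be sketched as follows. This is an illustrative skeleton only: the Banzhaf feature-selection step is replaced by a simple stand-in (the candidate feature with the widest range), and all names are ours rather than from the paper.

```python
import random
from collections import Counter

def build_tree(X, y, n_feats, min_purity, rng, depth=0, max_depth=10):
    """Recursively grow one tree: random feature subset, midpoint split.
    The split criterion here is a stand-in for Banzhaf selection."""
    majority = Counter(y).most_common(1)[0][0]
    error = sum(1 for label in y if label != majority) / len(y)
    if error <= min_purity or depth >= max_depth or len(y) < 2:
        return {"leaf": majority}
    feats = rng.sample(range(len(X[0])), min(n_feats, len(X[0])))
    # stand-in criterion: pick the candidate feature with the widest range
    f = max(feats, key=lambda j: max(r[j] for r in X) - min(r[j] for r in X))
    vals = [r[f] for r in X]
    t = (min(vals) + max(vals)) / 2.0  # midpoint split
    li = [i for i, v in enumerate(vals) if v <= t]
    ri = [i for i, v in enumerate(vals) if v > t]
    if not li or not ri:
        return {"leaf": majority}
    return {"feat": f, "thr": t,
            "left": build_tree([X[i] for i in li], [y[i] for i in li],
                               n_feats, min_purity, rng, depth + 1, max_depth),
            "right": build_tree([X[i] for i in ri], [y[i] for i in ri],
                                n_feats, min_purity, rng, depth + 1, max_depth)}

def predict_tree(node, x):
    """Route a query point to a leaf and return its majority label."""
    while "leaf" not in node:
        node = node["left"] if x[node["feat"]] <= node["thr"] else node["right"]
    return node["leaf"]
```

A forest would call `build_tree` once per bootstrap sample and vote over `predict_tree` outputs.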
Our algorithm is similar to the original algorithm of . Both use bootstrap aggregating, i.e., the bagging ensemble algorithm. The main difference between BRF and the algorithm of  is in how the feature associated with a node is selected: BRF uses the Banzhaf power index, while Breiman's method uses the Gini index. Another difference is that BRF splits each node at the midpoint of the feature values, while Breiman's algorithm does not. More importantly, as shown in the next section, the consistency of BRF is theoretically guaranteed, but that of Breiman's algorithm is not.
We have also tested a pure Banzhaf random forests model, i.e., one in which the feature of the root node is also selected via the Banzhaf power index. Its performance is generally worse than that of the BRF algorithm described above. One reason for this result may be that the feature selected via the information gain rate at the root node captures some important invariant information of the data.
We denote a recursive tree created by the BRF algorithm based on the data $D_n = \{(X_1, Y_1), \dots, (X_n, Y_n)\}$ as $g_n$, where $(X_i, Y_i)$ are i.i.d. pairs of random variables such that $X_i$ (the feature vector) takes its value in $\mathbb{R}^M$ while $Y_i$ (the label) is a multi-class random variable. To make a prediction for a query point $x$, each Banzhaf decision tree computes
$$\eta_n^c(x) = \frac{1}{N(A_n(x))} \sum_{i : X_i \in A_n(x)} \mathbb{I}\{Y_i = c\},$$
where $A_n(x)$ denotes the node of the tree containing $x$, and $N(A_n(x))$ is the number of training points located in $A_n(x)$. Then the tree prediction is the class which maximizes this estimate:
$$g_n(x) = \arg\max_c \eta_n^c(x).$$
The forest predicts the class with the most votes from the individual trees.
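The voting step can be sketched as (a hypothetical helper, not code from the paper):

```python
from collections import Counter

def forest_predict(trees, predict_one, x):
    """Majority vote over per-tree predictions; ties go to the class seen first."""
    votes = Counter(predict_one(tree, x) for tree in trees)
    return votes.most_common(1)[0][0]
```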
5 Consistency of Banzhaf random forests
In this section, we prove the consistency of Banzhaf random forests. We denote the Banzhaf tree created by Banzhaf random forests trained on the data $D_n$ as $g_n$. The consistency of a sequence of classifiers is defined as follows.
Definition 1 A sequence of classifiers $\{g_n\}$ is consistent for a given distribution of $(X, Y)$ if the probability of prediction error of $g_n$ converges in probability to the Bayes risk $R^*$,
$$P(g_n(X, Z, D_n) \ne Y) \to R^*$$
as $n \to \infty$. Here, $Z$ denotes the randomness in the tree-building algorithm, $D_n$ is the training data set, and the probability in the convergence is over the random selection of $D_n$. The Bayes risk $R^*$ is the probability of prediction error of the Bayes classifier, which makes predictions by choosing the class with the highest posterior probability, $g^*(x) = \arg\max_c P(Y = c \mid X = x)$.
To reduce the complexity of the problem, we observe that a multi-class classifier can be transformed into a combination of several binary classifiers. So, we need to prove the consistency of the estimators of the posterior distribution of each class. A similar result was shown by Denil et al. .
Lemma 1 Suppose we have estimates $\eta_n^c(x)$ for each class posterior $\eta^c(x) = P(Y = c \mid X = x)$, and that these estimates are each consistent. The classifier
$$g_n(x) = \arg\max_c \eta_n^c(x)$$
is consistent for the corresponding multi-class classification problem.
Proof. By definition, the rule
$$g^*(x) = \arg\max_c \eta^c(x)$$
achieves the Bayes risk. In the case where all the $\eta^c(x)$ are equal there is nothing to prove, since all choices have the same probability of error. So, suppose there is at least one $c$ such that $\eta^c(x) < \eta^{c^*}(x)$, where $c^* = \arg\max_c \eta^c(x)$, and define
$$m(x) = \eta^{c^*}(x) - \max_{c \ne c^*} \eta^c(x).$$
The function $m(x)$ is the margin function, which measures how much better the best choice is than the second best choice. The function $\hat m(x) = \eta_n^{c^*}(x) - \max_{c \ne c^*} \eta_n^c(x)$ measures the margin of $g_n$. If $\hat m(x) > 0$ then $g_n$ has the same probability of error as the Bayes classifier.
The assumption above guarantees that there is some $\epsilon > 0$ such that $m(x) > 2\epsilon$. Using $C$ to denote the number of classes, by making $n$ large we can satisfy
$$P(|\eta_n^c(X) - \eta^c(X)| \ge \epsilon) \le \frac{\delta}{C}$$
since $\eta_n^c$ is consistent. Thus
$$P\left(\bigcup_{c=1}^{C} \left\{|\eta_n^c(X) - \eta^c(X)| \ge \epsilon\right\}\right) \le \delta.$$
So with probability at least $1 - \delta$ we have
$$\hat m(X) \ge m(X) - 2\epsilon > 0.$$
Since $\delta$ is arbitrary, this means that the risk of $g_n$ converges in probability to the Bayes risk.
Lemma 1 allows us to transform proving the consistency of the multi-class classifier into proving the consistency of several two-class posterior estimates. That is, given a set of classes, we can re-assign the labels using the map $(X, Y) \mapsto (X, \mathbb{I}\{Y = c\})$ for any class $c$ in order to get a two-class problem, where the posterior $\eta(x)$ in this new problem is equal to $\eta^c(x)$ in the original multi-class problem.
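The label re-assignment and the Lemma 1 decision rule can be sketched as (hypothetical helpers, not code from the paper):

```python
def one_vs_rest_labels(y, c):
    """Map a multi-class label vector to the binary problem for class c."""
    return [1 if label == c else 0 for label in y]

def argmax_classifier(posterior_estimates, x):
    """Lemma 1 rule: predict the class whose posterior estimate is largest.
    `posterior_estimates` maps class -> callable eta_c(x)."""
    return max(posterior_estimates, key=lambda c: posterior_estimates[c](x))
```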
Then, we are inspired by . The following Lemma 2 allows us to focus our attention on the consistency of each of the tree estimators in the classification forest.
Lemma 2 Assume that the sequence $\{g_n\}$ of randomized classifiers is consistent for a certain distribution of $(X, Y)$. Then the voting classifier $g_n^{(M)}$, obtained by taking the majority vote over $M$ (not necessarily independent) copies of $g_n$, is also consistent.
Proof. Let $g^*$ denote the Bayes classifier. Consistency of $g_n$ is equivalent to saying that $P(g_n(X, Z, D_n) \ne Y) \to R^*$. In fact, since $P(g_n(x, Z, D_n) \ne Y \mid X = x) \ge P(g^*(x) \ne Y \mid X = x)$ for all $x$, consistency of $g_n$ means that for $\mu$-almost all $x$ (where $\mu$ is the distribution of $X$),
$$P(g_n(x, Z, D_n) \ne g^*(x)) \to 0.$$
Define the voting classifier for the two-class problem with labels in $\{0, 1\}$,
$$g_n^{(M)}(x) = \mathbb{I}\left\{\frac{1}{M} \sum_{j=1}^{M} g_n(x, Z_j, D_n) \ge \frac{1}{2}\right\},$$
which means it suffices to show that $P(g_n^{(M)}(x) \ne g^*(x)) \to 0$ for $\mu$-almost all $x$. However, using $Z_1, \dots, Z_M$ to denote the (possibly dependent) copies of $Z$, for $\mu$-almost all $x$ we have
$$P(g_n^{(M)}(x) \ne g^*(x)) \le P\left(\frac{1}{M} \sum_{j=1}^{M} \mathbb{I}\{g_n(x, Z_j, D_n) \ne g^*(x)\} \ge \frac{1}{2}\right).$$
By Markov's inequality,
$$P(g_n^{(M)}(x) \ne g^*(x)) \le \frac{2}{M} \sum_{j=1}^{M} P(g_n(x, Z_j, D_n) \ne g^*(x)) = 2\, P(g_n(x, Z, D_n) \ne g^*(x)) \to 0.$$
According to Lemma 2, the consistency of Banzhaf random forests is implied by the consistency of the trees of which they are composed. In addition, we use the bagging ensemble method to construct BRF, so by Theorem 1 in , the consistency of a voting Banzhaf random forest follows from the consistency of the base classifier. Here, Biau et al. introduce a parameter $q_n$: in the bootstrap sample, each data pair is present with probability $q_n$, independently of the others.
Theorem 1 Let $\{g_n\}$ be a sequence of classifiers that is consistent for the distribution of $(X, Y)$. Consider the Banzhaf random forest (majority voting) classifiers using the parameter $q_n$. If $n q_n \to \infty$ as $n \to \infty$, then both classifiers are consistent.
Proof. See the proof of Theorem 1 in .
With Lemma 2 and Theorem 1 established, the remainder of the effort goes into proving the consistency of the Banzhaf tree construction, since each tree in a Banzhaf forest is built based on the Banzhaf power index. We show that if a classifier is conditionally consistent given a sequence of random variables, and the sampling process based on the Banzhaf power index generates acceptable sequences with probability 1, then the resulting classifier is unconditionally consistent.
Theorem 2 Suppose $\{g_n\}$ is a sequence of classifiers whose probability of error converges conditionally in probability to the Bayes risk for a specified distribution on $(X, Y)$, i.e.,
$$P(g_n(X, Z, I, D_n) \ne Y \mid I) \to R^*$$
for all $I \in \mathcal{I}$, where $I$ is a random sequence produced by the Banzhaf power index, and $\nu$ is a distribution on $I$. If $\nu(\mathcal{I}) = 1$, which means the process produces acceptable sequences with probability 1, then the probability of error converges unconditionally in probability, i.e.,
$$P(g_n(X, Z, I, D_n) \ne Y) \to R^*,$$
and $g_n$ is consistent for the specified distribution.
Proof. The sequence in question is uniformly integrable, so it is sufficient to show that the convergence of $\mathbb{E}[P(g_n(X, Z, I, D_n) \ne Y \mid I)]$ implies the result, where the expectation is taken over the random selection of the training set, and $I$ is the specific structure of the tree, $I \sim \nu$. We can write
$$P(g_n(X, Z, I, D_n) \ne Y) = \int P(g_n(X, Z, I, D_n) \ne Y \mid I = i)\, \nu(di).$$
By the assumption $\nu(\mathcal{I}) = 1$, we then have
$$P(g_n(X, Z, I, D_n) \ne Y) = \int_{\mathcal{I}} P(g_n(X, Z, I, D_n) \ne Y \mid I = i)\, \nu(di).$$
Since probabilities are bounded in the interval $[0, 1]$, the dominated convergence theorem allows us to exchange the integral and the limit,
$$\lim_{n \to \infty} P(g_n(X, Z, I, D_n) \ne Y) = \int_{\mathcal{I}} \lim_{n \to \infty} P(g_n(X, Z, I, D_n) \ne Y \mid I = i)\, \nu(di),$$
and by assumption the conditional risk converges to the Bayes risk for all $i \in \mathcal{I}$, so
$$\lim_{n \to \infty} P(g_n(X, Z, I, D_n) \ne Y) = \int_{\mathcal{I}} R^*\, \nu(di) = R^*,$$
which is the desired result.
In fact, we let the Banzhaf power index equal the payoff distribution function in a tree-construction game. Because we choose the maximal Banzhaf power index for each node of each tree, we obtain an acceptable random variable sequence in which every element has the maximal Banzhaf power index. By , the cooperation of these random variables yields the best result. So it is sufficient to show that the Banzhaf tree is consistent conditioned on such a sequence.
In conclusion, we have proved the consistency of our tree construction by Theorem 2. Since Theorem 1 is established, we obtain the consistency of Banzhaf random forests.
6 Experiments
To evaluate the proposed algorithm, BRF, we tested it on several data sets from the UCI machine learning repository, including iris, wine, ecoli, thyroid, soybean, shuttle, dermatology, sonar and musk2. We compare it with Breiman's random forests  and the model proposed in . We implemented Breiman's random forests with C4.5 as it generally performs well on classification problems. As mentioned above, the model proposed in  is consistent. For comparison, we also list the classification results yielded by k-nearest neighbor classifiers (KNNs) and support vector machines (SVMs).
Table 1 shows the specific information of the used UCI data sets.
6.1 Effect of the number of trees in BRF
To evaluate the effect of the number of trees in BRF, we conducted experiments on three data sets: iris, ecoli and shuttle. Fig. 2 shows the obtained classification accuracy against the number of trees in BRF. We can see that BRF is largely robust to the number of trees. In particular, when the number of trees equals 100, BRF performs slightly better than with other values.
6.2 Comparison on running efficiency
To test the running speed of BRF, we performed experiments on seven data sets: iris, wine, ecoli, soybean, thyroid, dermatology and shuttle. We compared it with the model of  and that of . From Table 2, we can see that BRF runs slower than the model of . This is mainly because calculating the Banzhaf power index takes some time when constructing the trees. However, BRF is more efficient than the model of , which is a state-of-the-art consistent random forests model.
6.3 Classification results
To evaluate BRF on multi-class classification problems, we compared it with KNNs, SVMs, the model of , and the model of . Nine UCI data sets were used: iris, wine, ecoli, thyroid, soybean, shuttle, dermatology, sonar and musk2. For all these data sets, we used 5-fold cross validation to test the models, and the average classification accuracies are reported. For the model of  and BRF, we used the same number of trees and random features. Following Breiman's suggestion for classification problems , we set the number of trees according to the dimensionality of the features. To be fair, we set the same termination condition for all the random forests models, i.e., the percentage of incorrectly assigned samples at the termination node should be no greater than the number of classes on a data set. For KNNs and SVMs, we selected the parameters with 5-fold cross validation.
Table 3 shows the results obtained by the compared models and BRF. We can see that BRF performs slightly better than KNNs, SVMs and the model of , and consistently better than the model of . This demonstrates that using interdependent features to construct the randomized trees can lead to better results than using independent features in random forests.
7 Conclusion
In this paper, we have proposed a novel random forests model called Banzhaf random forests (BRF) based on concepts from cooperative game theory. Its consistency is proved, which takes a step towards narrowing the gap between the theory and practice of random forests. This work is probably the first to apply cooperative game theory to random forests, and we have tested and verified the feasibility of the idea. Experiments on UCI data sets show that BRF not only slightly outperforms state-of-the-art classifiers, including KNNs, SVMs and the random forests model by Breiman , but is also much more efficient than existing consistent random forests.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (NSFC) under Grants no. 61271405 and 61403353, and the Fundamental Research Funds for the Central Universities of China.
References
-  Zhou, Zhi-Hua: Ensemble methods: foundations and algorithms. CRC Press (2012)
-  Breiman, Leo.: Random forests. Machine learning, vol. 45, pp. 5–32. Springer (2001)
-  Lepetit, Vincent and Fua, Pascal: Keypoint recognition using randomized trees. Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, pp. 1465–1479. IEEE (2006)
-  Ozuysal, Mustafa and Fua, Pascal and Lepetit, Vincent: Fast keypoint recognition in ten lines of code. Computer Vision and Pattern Recognition, 2007, CVPR’07. pp. 1–8. Ieee (2007)
-  Shotton, Jamie and Sharp, Toby and Kipman, Alex and Fitzgibbon, Andrew and Finocchio, Mark and Blake, Andrew and Cook, Mat and Moore, Richard: Real-time human pose recognition in parts from single depth images. Communications of the ACM, vol. 56, pp. 116–124. ACM (2013)
-  Zikic, Darko and Glocker, Ben and Criminisi, Antonio: Atlas encoding by randomized forests for efficient label propagation. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2013, pp. 66–73. Springer (2013)
-  Winn, John and Criminisi, Antonio: Object class recognition at a glance. In Video Proc. CVPR (2006)
-  Yin, Pei and Criminisi, Antonio and Winn, John and Essa, Irfan: Tree-based classifiers for bilayer video segmentation. Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pp. 1–8. IEEE (2007)
-  Bosch, Anna and Zisserman, Andrew and Muñoz, Xavier: Image classification using random forests and ferns. Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pp. 1–8. IEEE (2007)
-  Shotton, Jamie and Johnson, Matthew and Cipolla, Roberto: Semantic texton forests for image categorization and segmentation. Computer vision and pattern recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1–8. IEEE (2008)
-  Biau, Gérard and Devroye, Luc and Lugosi, Gábor: Consistency of random forests and other averaging classifiers. The Journal of Machine Learning Research, vol. 9, pp. 2015–2033. JMLR. org (2008)
-  Biau, Gérard: Analysis of a random forests model. The Journal of Machine Learning Research, vol. 13, pp. 1063–1095. JMLR. org (2012)
-  Breiman, Leo and Friedman, Jerome and Stone, Charles J and Olshen, Richard A.: Classification and regression trees. CRC press (1984)
-  Breiman, Leo.: Bagging predictors. Machine learning, vol. 24, pp. 123–140. Springer (1996)
-  Ho, Tin Kam: The random subspace method for constructing decision forests, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 20, pp. 832–844. IEEE (1998)
-  Amit, Yali and Geman, Donald: Shape quantization and recognition with randomized trees. Neural computation, vol. 9, pp. 1545–1588. MIT Press (1997)
-  Dietterich, Thomas G.: An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine learning, vol. 40, pp. 139–157. Springer (2000)
-  Kwok, Suk Wah and Carter, Chris: Multiple decision trees. arXiv preprint arXiv:1304.2363 (2013)
-  Criminisi, Antonio and Shotton, Jamie and Konukoglu, Ender: Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Foundations and Trends® in Computer Graphics and Vision, pp. 81–227 (2012)
-  Svetnik, Vladimir and Liaw, Andy and Tong, Christopher and Culberson, J Christopher and Sheridan, Robert P and Feuston, Bradley P.: Random forest: a classification and regression tool for compound classification and QSAR modeling. Journal of chemical information and computer sciences, vol. 43, pp. 1947–1958. ACS Publications (2003)
-  Prasad, Anantha M and Iverson, Louis R and Liaw, Andy: Newer classification and regression tree techniques: bagging and random forests for ecological prediction. Ecosystems, vol. 9, pp. 181–199. Springer (2006)
-  Cutler, D Richard and Edwards Jr, Thomas C and Beard, Karen H and Cutler, Adele and Hess, Kyle T and Gibson, Jacob and Lawler, Joshua J.: Random forests for classification in ecology. Ecology, vol. 88, pp. 2783–2792. Eco Soc America (2007)
-  Criminisi, Antonio and Shotton, Jamie: Decision forests for computer vision and medical image analysis. Springer Science & Business Media (2013)
-  Breiman, Leo.: Consistency for a simple model of random forests. Statistical Department, University of California at Berkeley. Technical Report, (2004)
-  Meinshausen, Nicolai: Quantile regression forests. The Journal of Machine Learning Research, vol. 7, pp. 983–999. JMLR. org (2006)
-  Ishwaran, Hemant and Kogalur, Udaya B.: Consistency of random survival forests. Statistics & probability letters, vol. 80, pp. 1056–1064. Elsevier (2010)
-  Denil, Misha and Matheson, David and de Freitas, Nando: Consistency of online random forests. arXiv preprint arXiv:1302.4853 (2013)
-  Denil, Misha and Matheson, David and De Freitas, Nando: Narrowing the gap: Random forests in theory and in practice. arXiv preprint arXiv:1310.1415, (2013)
-  Lin, Yi and Jeon, Yongho: Random forests and adaptive nearest neighbors. Journal of the American Statistical Association, vol. 101, pp. 578–590. Taylor & Francis (2006)
-  Györfi, L and Devroye, L and Lugosi, G.: A probabilistic theory of pattern recognition. Springer-Verlag, (1996)
-  Schapire, Robert E and Freund, Yoav: Boosting: Foundations and Algorithms. Kybernetes, vol. 42, pp. 164–166. Emerald Group Publishing Limited (2013)
-  Hastie, Trevor and Tibshirani, Robert and Friedman, Jerome and Hastie, T and Friedman, J and Tibshirani, R.: The elements of statistical learning. vol. 2. Springer(2009)
-  Banzhaf III, John F.: Weighted voting doesn't work: A mathematical analysis. Rutgers L. Rev., vol. 19, pp. 317. HeinOnline (1964)
-  Chalkiadakis, Georgios and Elkind, Edith and Wooldridge, Michael: Computational aspects of cooperative game theory. Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 5, pp. 1–168. Morgan & Claypool Publishers (2011)