The decision tree is a non-parametric supervised learning method used for classification and regression. Although decision trees were among the first machine learning approaches, they remain an actively researched topic in machine learning. They are not only simple to understand and interpret, but also offer relatively good results, computational efficiency, and flexibility. The general idea of decision trees is to predict unknown input instances by learning simple decision rules inferred from known training instances. Decision trees are most often induced in a top-down manner: a given data set is partitioned into a left and a right subset by a split criterion test on attributes. The highest-scoring partition, i.e. the one that most reduces the average uncertainty, is selected, and the data set is partitioned accordingly into two child nodes, growing the tree by making the current node the parent of the two newly created children. This procedure is applied recursively until some stopping condition, e.g. maximum tree depth or minimum leaf size, is reached.
Generally speaking, the split criterion is a fundamental issue in decision tree induction. A large number of decision tree induction algorithms with different split criteria have been proposed. For example, the Iterative Dichotomiser 3 (ID3) algorithm [1] is based on Shannon entropy; the C4.5 algorithm [2] is based on the Gain Ratio, which can be considered a normalized Shannon entropy; and the Classification And Regression Tree (CART) algorithm [3] is based on the Gini index. These algorithms seem to be independent, and it is hard to judge which algorithm always outperforms the others. In fact, this reflects one drawback of such split criteria: they lack adaptability to data sets. Numerous alternatives have been proposed for adaptive entropy estimation [4, 5], but their statistical entropy estimates are so complex that they lose the simplicity and comprehensibility of decision trees. Above all, to the best of our knowledge, there is no unified framework combining all the above criteria. In addition, a series of papers has analyzed the importance of the split criterion [6, 7], demonstrating that different split criteria have a substantial influence on the generalization error of the induced decision trees. This inspired our proposed new split criterion, which unifies and generalizes the classical split criteria.
To address the above issue, we propose a Tsallis entropy framework in this paper. Tsallis entropy is a generalization of Shannon entropy with an adjustable parameter q and was first introduced into decision trees in prior work [8]. However, that work only tested the performance of Tsallis entropy in C4.5 with a few given values of q; the relation between Tsallis entropy and other split criteria was not explored, and no unified framework was presented. In this paper, we propose a Tsallis entropy based decision tree induction algorithm, called the TEC algorithm, and analyze the correspondence between Tsallis entropy with different q and other split criteria. Shannon entropy and the Gini index are just two specific cases of Tsallis entropy, with q → 1 and q = 2 respectively, while the Gain Ratio can also be considered a normalized Tsallis entropy with q → 1. Tsallis entropy thus provides a new approach to improving the performance of decision trees with a tunable q in a unified framework. Experimental results on UCI data sets indicate that the TEC algorithm achieves statistically significant improvements over the classical algorithms without losing the strengths of decision trees.
The rest of this paper is organized as follows. Section 2 presents the background of Tsallis entropy. Section 3 outlines our proposed TEC algorithm. Section 4 exhibits experimental results. Section 5 summarizes the work.
II Tsallis entropy
Entropy is a measure of disorder in physical systems, or of the amount of information needed to specify the full microstate of a system [9]. In 1948, Shannon adopted entropy in information theory, giving rise to Shannon entropy [10], a measure of the uncertainty associated with a random variable:
\[ H(X) = -\sum_{i=1}^{n} p_i \log p_i, \]
where X is a random variable that can take the values x_1, ..., x_n and p_1, ..., p_n are the corresponding probabilities of X. Shannon entropy is concave and attains its maximum when p_1 = ... = p_n = 1/n.
There are two typical distribution families observed in the macroscopic world: the exponential family and the power-law heavy-tailed family. However, we cannot characterize power-law heavy-tailed distributions by maximizing Shannon entropy subject to a normal mean and variance, because Shannon entropy implicitly assumes a certain trade-off between contributions from the tails and from the main mass of the distribution. It is worthwhile to control this trade-off explicitly in order to characterize both distribution families. Entropy measures that depend on powers of probability, p_i^q, can provide such control, and thus several parameterized entropies have been proposed. A well-known generalization of this concept is Tsallis entropy [11], which extends its applications to so-called non-extensive systems using an adjustable parameter q. Tsallis entropy can describe physical systems with complex behaviors such as long-range and long-memory interactions [12, 13].
Tsallis entropy is defined by:
\[ S_q(X) = \frac{1}{q-1}\Big(1 - \sum_{i=1}^{n} p_i^q\Big), \]
which converges to Shannon entropy in the limit q → 1:
\[ \lim_{q \to 1} S_q(X) = -\sum_{i=1}^{n} p_i \log p_i = H(X). \]
The relation to Shannon entropy can be made clearer by rewriting the definition in the form:
\[ S_q(X) = \sum_{i=1}^{n} p_i \ln_q \frac{1}{p_i}, \qquad \ln_q x = \frac{x^{1-q} - 1}{1-q}, \]
where ln_q is called the q-logarithmic function. When q → 1, ln_q x → ln x.
Just as the exponential function is the inverse of the logarithmic function, there is a corresponding q-exponential function inverse to the q-logarithmic function:
\[ e_q^x = \big[1 + (1-q)x\big]^{\frac{1}{1-q}}. \]
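The q-deformed pair can be checked numerically. The following is a minimal sketch (the function names ln_q and exp_q are our own labels, and the explicit handling of the q → 1 limit is an implementation convenience, not part of the definitions):

```python
import math

def ln_q(x, q):
    """q-logarithm: (x^(1-q) - 1) / (1 - q); the ordinary ln as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    """q-exponential, the inverse of ln_q: (1 + (1-q)x)^(1/(1-q))."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

# The round trip recovers the argument, mirroring exp(log(x)) = x.
roundtrip = exp_q(ln_q(3.0, q=1.5), q=1.5)
```

The round trip exp_q(ln_q(x)) recovers x for any q, just as the ordinary exponential inverts the logarithm.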
For q < 0, Tsallis entropy is convex. For q = 0, Tsallis entropy is non-convex and non-concave. For q > 0, Tsallis entropy is concave and satisfies properties similar to those of Shannon entropy. For instance, for q > 0, S_q(X) ≥ 0, and S_q attains its maximum value
\[ S_q^{\max} = \frac{1 - n^{1-q}}{q-1} = \ln_q n \]
at the uniform distribution p_i = 1/n.
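As a concrete illustration of these properties, the following sketch computes Tsallis entropy directly from its definition (the function name and the natural-log convention for the q → 1 limit are our own choices):

```python
import math

def tsallis_entropy(p, q):
    """S_q(p) = (1 - sum_i p_i^q) / (q - 1); Shannon entropy
    (natural-log convention) is recovered in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# For q > 0, the entropy is maximal at the uniform distribution.
uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
s_uniform = tsallis_entropy(uniform, q=1.5)
s_skewed = tsallis_entropy(skewed, q=1.5)
```

As expected for q > 0, the uniform distribution scores strictly higher than the skewed one, and q = 2 reproduces the familiar Gini-style quantity 1 − Σ p_i².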
Additivity is a crucial difference between the fundamental properties of Shannon entropy and Tsallis entropy. For two independent random variables X and Y, Shannon entropy has the additivity property:
\[ H(X, Y) = H(X) + H(Y); \]
Tsallis entropy, however, has the pseudo-additivity (also called q-additivity) property:
\[ S_q(X, Y) = S_q(X) + S_q(Y) + (1-q)\, S_q(X)\, S_q(Y). \]
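The pseudo-additivity identity is easy to verify numerically for a product (independent) joint distribution; the two marginal distributions below are arbitrary examples of our own choosing:

```python
def tsallis_entropy(p, q):
    # S_q(p) = (1 - sum_i p_i^q) / (q - 1), for q != 1
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

q = 2.0
px = [0.2, 0.8]
py = [0.5, 0.3, 0.2]
# Joint distribution of two independent variables: p(x, y) = p(x) p(y).
joint = [a * b for a in px for b in py]

lhs = tsallis_entropy(joint, q)
rhs = (tsallis_entropy(px, q) + tsallis_entropy(py, q)
       + (1 - q) * tsallis_entropy(px, q) * tsallis_entropy(py, q))
```

The two sides agree to machine precision; for q → 1 the cross term vanishes and ordinary additivity is recovered.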
Besides, Tsallis conditional entropy, Tsallis joint entropy, and Tsallis mutual information can be derived similarly to their Shannon counterparts. For the conditional probability p(x|y) and the joint probability p(x, y), Tsallis conditional entropy [14] and Tsallis joint entropy are defined by:
\[ S_q(X|Y) = -\sum_{x,y} p(x,y)^q \ln_q p(x|y), \qquad S_q(X,Y) = -\sum_{x,y} p(x,y)^q \ln_q p(x,y). \]
It is remarkable that the conditional entropy can be easily rewritten as a weighted sum:
\[ S_q(X|Y) = \sum_{y} p(y)^q\, S_q(X|Y=y). \]
The relation between the conditional entropy and the joint entropy is given by:
\[ S_q(X, Y) = S_q(Y) + S_q(X|Y). \]
Tsallis mutual information [15] is defined as the difference between Tsallis entropy and Tsallis conditional entropy:
\[ I_q(X; Y) = S_q(X) - S_q(X|Y), \]
and the chain rule of Tsallis mutual information for random variables X and Y holds:
\[ I_q(X; Y) = S_q(X) - S_q(X|Y) = S_q(Y) - S_q(Y|X) = S_q(X) + S_q(Y) - S_q(X, Y). \]
In summary, Tsallis entropy generalizes Shannon entropy with an adjustable parameter q and has a wider range of applications.
III Tsallis Entropy Criterion (TEC) algorithm
One key issue in the procedure of decision tree induction is the split criterion. At every step, the decision tree chooses the pair of attribute and cutting point that yields the maximal impurity decrease to split the data and grow the tree. Therefore, the attribute chosen for the split significantly affects the construction of the decision tree and further influences the classification performance.
III-A Tree construction
Given a data set D with attributes A_1, ..., A_d and class label Y, for each tree node we search every possible pair of attribute and cutting point to choose the optimal one as follows: for an attribute A_j and candidate cutting point T_j,
\[ (A^*, T^*) = \arg\max_{A_j, T_j} \Big[ I(D) - \frac{|D_L|}{|D|} I(D_L) - \frac{|D_R|}{|D|} I(D_R) \Big]. \]
Here T_j is a candidate cutting point for attribute A_j, D is the data set belonging to the node to be partitioned, and D_L, D_R are the two child nodes that would be created if D were partitioned at T_j. The function I(·) is the impurity criterion, e.g. Tsallis entropy, computed over the labels of the data that fall into the node. The pair of attribute and cutting point that maximizes this impurity decrease is chosen to grow the tree.
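A minimal sketch of this exhaustive split search might look as follows; the helper names (tsallis_impurity, best_split) and the use of observed attribute values as candidate cutting points are our simplifying assumptions, not the paper's exact implementation:

```python
import numpy as np

def tsallis_impurity(y, q):
    """Tsallis entropy of the empirical class distribution in a node."""
    if len(y) == 0:
        return 0.0
    p = np.unique(y, return_counts=True)[1] / len(y)
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def best_split(X, y, q):
    """Exhaustive search over (attribute, cutting point) pairs for the
    split that maximizes the impurity decrease under the Tsallis criterion."""
    n, d = X.shape
    parent = tsallis_impurity(y, q)
    best_j, best_t, best_gain = None, None, -np.inf
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:      # candidate cutting points
            left = X[:, j] <= t
            gain = (parent
                    - left.mean() * tsallis_impurity(y[left], q)
                    - (~left).mean() * tsallis_impurity(y[~left], q))
            if gain > best_gain:
                best_j, best_t, best_gain = j, t, gain
    return best_j, best_t, best_gain

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])
j, t, gain = best_split(X, y, q=2.0)
```

On this toy data, the search recovers the cut x ≤ 2 that perfectly separates the two classes, and the impurity decrease equals the parent's impurity (here, with q = 2, its Gini index of 0.5).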
The above procedure is applied recursively until some stopping conditions are reached. The stopping conditions consist of three principles: (i) The classification is achieved in a subset. (ii) No attributes are left for selection. (iii) The cardinality of a subset is not greater than the predefined threshold.
Once the tree has been trained on the data as a classifier, it can be used to predict labels for new unlabeled instances.
The decision tree makes predictions in a majority-vote manner. For each class c,
\[ p(c \mid x) = \frac{N_c(\ell(x))}{N(\ell(x))}, \]
where ℓ(x) denotes the leaf containing x, N_c(ℓ(x)) denotes the number of instances of class c located in ℓ(x), and N(ℓ(x)) is the total number of instances in the leaf. The tree prediction is then the class that maximizes this value:
\[ \hat{y}(x) = \arg\max_{c} \frac{N_c(\ell(x))}{N(\ell(x))}. \]
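The leaf prediction rule amounts to a few lines; this fragment is illustrative only (the function name is our own):

```python
from collections import Counter

def leaf_predict(leaf_labels):
    """Majority vote: return the class c maximizing N_c(leaf) / N(leaf)."""
    return Counter(leaf_labels).most_common(1)[0][0]

# Class 1 holds 3 of the 5 training instances in this leaf.
pred = leaf_predict([1, 0, 1, 1, 2])
```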
III-C TEC algorithm
Here, we summarize our proposed Tsallis Entropy Criterion (TEC) algorithm in pseudo-code format in Algorithm 1. Compared with the classical decision tree induction algorithms, the only difference is the split criterion: we use Tsallis entropy in place of the classical split criteria, e.g. Shannon entropy, Gain Ratio, and Gini index. In fact, as the following subsection shows, Tsallis entropy unifies Shannon entropy, Gain Ratio, and Gini index under different values of q.
III-D Relations to other criteria
As described above, Tsallis entropy unifies Shannon entropy, Gain Ratio, and Gini index in one framework. In the following, we reveal the relations between Tsallis entropy and the other split criteria.
Tsallis entropy converges to Shannon entropy for q → 1:
\[ \lim_{q \to 1} S_q(X) = \lim_{q \to 1} \frac{1 - \sum_{i=1}^{n} p_i^q}{q - 1} = -\sum_{i=1}^{n} p_i \log p_i = H(X). \]
Besides, the Gini index is exactly a specific case of Tsallis entropy with q = 2:
\[ S_2(X) = 1 - \sum_{i=1}^{n} p_i^2 = \mathrm{Gini}(X). \]
As for the Gain Ratio, which adds a normalizing factor to the Information Gain, it can be seen as the normalized Information Gain:
\[ \mathrm{GainRatio}(D, A) = \frac{H(D) - \sum_{v} \frac{|D_v|}{|D|} H(D_v)}{H_A(D)}, \]
where H represents Shannon entropy, D_v is the subset of D taking the v-th value of attribute A, and H_A(D) is the Shannon entropy of the split proportions |D_v|/|D|. If H is replaced by Tsallis entropy, the Gain Ratio is generalized to the Tsallis Gain Ratio. Thus, the Gain Ratio is also covered by Tsallis entropy with a normalizing factor (the Tsallis Gain Ratio) with q → 1.
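The normalization can be sketched as follows; the decomposition into dist/tsallis helpers is our own, and with q → 1 the function reduces to the classical Gain Ratio:

```python
import math
from collections import Counter

def tsallis(p, q):
    """Tsallis entropy; Shannon entropy is recovered in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(x * math.log(x) for x in p if x > 0)
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

def dist(labels):
    """Empirical class distribution of a list of labels."""
    n = len(labels)
    return [c / n for c in Counter(labels).values()]

def tsallis_gain_ratio(parent, children, q):
    """Tsallis information gain of a split, normalized by the Tsallis
    entropy of the split proportions (the 'split information')."""
    n = len(parent)
    weights = [len(c) / n for c in children]
    gain = tsallis(dist(parent), q) - sum(
        w * tsallis(dist(c), q) for w, c in zip(weights, children))
    return gain / tsallis(weights, q)

# With q -> 1 this is the classical Gain Ratio; a perfect binary split
# of a balanced two-class parent yields a ratio of exactly 1.
gr = tsallis_gain_ratio([0, 0, 1, 1], [[0, 0], [1, 1]], q=1.0)
```

The ratio is exactly 1 in this example because the information gain equals the split information: both are the entropy of a fair coin.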
In summary, Tsallis entropy unifies three kinds of split criteria, i.e. Shannon entropy, Gain Ratio, and Gini index, and generalizes the split criterion of decision trees. As far as we know, this is the first time common split criteria have been unified into a parametric framework, and the first time the correspondence between Tsallis entropy with different q and other split criteria has been revealed. The optimal q for Tsallis entropy is obtained by cross-validation and is usually not equal to 1 or 2, which implies better performance than the traditional split criteria. Although the optimal q may differ across data sets, it is associated with the properties of the data set. That is to say, the parameter q gives the TEC algorithm adaptability and flexibility. Tsallis entropy indeed provides a new approach to improving decision trees' performance with a tunable q in a unified framework. In the Experiments section, we will see that the TEC algorithm achieves higher accuracy than the classical algorithms with an appropriate q.
IV Experiments
As illustrated in Section III, the TEC algorithm is based on Tsallis entropy with an adjustable parameter q and comprises the Tsallis entropy and Tsallis Gain Ratio split criteria. The Tsallis entropy split criterion degenerates to Shannon entropy and the Gini index with q → 1 and q = 2, respectively. With respect to the Gain Ratio, the Tsallis Gain Ratio (the normalized Tsallis entropy) also degenerates to the Gain Ratio with q → 1.
IV-A Evaluation Metric
In order to quantitatively compare the trees obtained by different methods, we choose accuracy to evaluate the effectiveness of a tree and the total number of tree nodes to measure its complexity.
IV-B Data Set Description
As shown in Table I, UCI data sets [16] are adopted to evaluate the proposed approaches. These data sets are of three types, namely numeric, categorical, and mixed. They also cover two kinds of classification problems: binary and multi-class.
IV-C Experiment Setup
The decision trees with different split criteria, i.e. Gain Ratio, Shannon entropy, Gini index, Tsallis entropy, and Tsallis Gain Ratio, are implemented in Python. We refer to the CART implementation in the scikit-learn platform [17] and the C4.5 implementation (J48) in Weka [18]. For each data set, we first randomly partition the data into a training set and a test set. Then, on the training set, we perform a grid search with 10-fold cross-validation to determine the value of q in Tsallis entropy and Tsallis Gain Ratio. The optimal q for Tsallis entropy and Tsallis Gain Ratio may differ, but for a fair comparison we choose the same q, i.e. the optimal q for Tsallis entropy. Besides, a minimal leaf size is imposed to avoid overfitting. After parameter selection, the best parameters are fixed. A decision tree is then trained on the training data without post-pruning and evaluated on the test data. The procedure, from the training-test partition to the evaluation, is repeated 10 times to reduce the influence of randomness.
Figure 1 gives an intuitive exhibition of the influence of different values of the parameter q in Tsallis entropy on the Glass data set. Figure 1 (a) illustrates that the accuracy is sensitive to the change of q. Figure 1 (b) shows that the tree complexity responds to the change of q differently than the accuracy does. It should be noted that there are different strategies for choosing q for various purposes, e.g. highest accuracy, lowest complexity, or a trade-off between the two, which is also a reflection of the TEC algorithm's adaptability to data sets. In this paper, we choose q by the highest-accuracy principle.
Table II reports the accuracy and complexity results of the different criteria on the different data sets. The highest accuracy and lowest complexity on each data set are in boldface. As expected, TEC outperforms ID3, CART, and C4.5, owing to the fact that Tsallis entropy is a generalization of Shannon entropy, the Gini index, and the Gain Ratio. Between the two variants of the TEC algorithm, i.e. Tsallis entropy and Tsallis Gain Ratio, neither absolutely prevails over the other. The results indicate that Tsallis entropy favors high accuracy while Tsallis Gain Ratio favors low complexity. The reason lies in the normalizing factor, which influences the tree structure to some extent. In addition, compared with Shannon entropy and the Gini index, Tsallis entropy achieves better performance in both accuracy and complexity, and Tsallis Gain Ratio obtains better results than the Gain Ratio. Three Wilcoxon signed rank tests [19] on accuracy (Tsallis entropy vs. Shannon entropy, Tsallis entropy vs. Gini index, Tsallis Gain Ratio vs. Gain Ratio) all reject the null hypothesis of equal performance. The results show that the TEC algorithm with an appropriate q achieves an average statistically significant improvement in accuracy while maintaining a lower complexity.
In terms of the optimal value of q, we observe a rough trend in Table II: the larger the number of classes, the smaller the optimal q tends to be; e.g., for the numeric-type data sets from Yeast to Haberman, q increases while the class number decreases (with the exception of Vehicle). In this paper, we choose the optimal value of q by cross-validation, but we conjecture that the value of q is associated with the properties of the data set. For example, on the Car data set, all the algorithms present almost the same results, which indicates that this data set is not sensitive to the parameter q. The relation between q and the properties of data sets will be discussed in future work.
In this paper, we present and evaluate Tsallis entropy for enhancing decision trees on a fundamental issue, i.e. the split criterion. We unify the classical split criteria into a parametric framework and propose the TEC algorithm with the Tsallis entropy split criterion, which generalizes Shannon entropy, Gain Ratio, and the Gini index through an adjustable parameter q. Above all, we reveal the relations between Tsallis entropy with different q and other split criteria. Experimental results indicate that, with an appropriate q, the TEC algorithm achieves an average statistically significant improvement in accuracy. Nevertheless, the approach has limitations that need to be addressed in the future, such as an estimation method for the parameter q in place of the current cross-validation. Furthermore, Tsallis entropy also has potential applications beyond decision trees, for instance in Random Forests and Bayesian networks, to be investigated in future work.
This research is supported in part by the 973 Program of China (No. 2012CB315803), the National Natural Science Foundation of China (No. 61371078, 61375054), and the Research Fund for the Doctoral Program of Higher Education of China (No. 20130002110051).
-  J. R. Quinlan, “Induction of decision trees,” Machine Learning, vol. 1, no. 1, pp. 81–106, 1986.
-  J. R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
-  L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen, Classification and regression trees. CRC press, 1984.
-  S. Nowozin, “Improved information gain estimates for decision tree induction,” in Proceedings of the 29th International Conference on Machine Learning (ICML-12). ACM, 2012, pp. 297–304.
-  M. Serrurier and H. Prade, “Entropy evaluation based on confidence intervals of frequency estimates: Application to the learning of decision trees,” in Proceedings of the 32nd International Conference on Machine Learning (ICML-15). ACM, 2015, pp. 1576–1584.
-  W. Buntine and T. Niblett, “A further comparison of splitting rules for decision-tree induction,” Machine Learning, vol. 8, no. 1, pp. 75–85, 1992.
-  W. Z. Liu and A. P. White, “The importance of attribute selection measures in decision tree induction,” Machine Learning, vol. 15, no. 1, pp. 25–41, 1994.
-  T. Maszczyk and W. Duch, “Comparison of Shannon, Renyi and Tsallis entropy used in decision trees,” in Proceedings of the 17th International Conference on Artificial Intelligence and Soft Computing (ICAISC-08). Springer, 2008, pp. 643–651.
-  R. Frigg and C. Werndl, “Entropy: a guide for the perplexed,” Probabilities in Physics, 2011.
-  C. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.
-  C. Tsallis, “Possible generalization of Boltzmann-Gibbs statistics,” Journal of Statistical Physics, vol. 52, no. 1-2, pp. 479–487, 1988.
-  C. Tsallis, Introduction to nonextensive statistical mechanics. Springer, 2009.
-  C. Tsallis, “Generalizing what we learnt: Nonextensive statistical mechanics,” in Introduction to Nonextensive Statistical Mechanics. Springer, 2009, pp. 37–106.
-  S. Abe and A. Rajagopal, “Nonadditive conditional entropy and its significance for local realism,” Physica A: Statistical Mechanics and its Applications, vol. 289, no. 1, pp. 157–164, 2001.
-  T. Yamano, “Information theory based on nonadditive information content,” Physical Review E, vol. 63, no. 4, p. 046105, 2001.
-  M. Lichman, “UCI machine learning repository,” http://archive.ics.uci.edu/ml, 2013.
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
-  M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, “The WEKA data mining software: an update,” ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009.
-  J. Demšar, “Statistical comparisons of classifiers over multiple data sets,” The Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.