1 Introduction
Beyond its physical interpretation, distance can be generalized to quantify the notion of similarity, which puts it at the heart of many learning methods, including the $k$-Nearest Neighbors ($k$NN) method, the $k$-means clustering method, and kernel regression. The conventional Euclidean distance treats all dimensions equally. With the growing complexity of modern datasets, however, the Euclidean distance is no longer effective in capturing the intrinsic similarity among individuals given a large number of heterogeneous input variables. This increasing scale of data also poses a curse of dimensionality: with limited sample size, the density of data points is greatly diluted, rendering high variance and high computational cost for Euclidean-distance-based learning methods. On the other hand, it is often assumed that the true informative structure with respect to the learning task is embedded within an intrinsic low-dimensional manifold
[1], on which model-free distance-based methods, such as $k$NN, are capable of taking advantage of the inherent structure. It is therefore desirable to construct a generalized measure of distance in a low-dimensional nonlinear feature space for improving the performance of classical distance-based learning methods when applied to complex and high-dimensional data.

We first consider the Mahalanobis distance as a generalization of the Euclidean distance. Let $\{x_1, \dots, x_n\}$ be a set of points in a feature space $\mathcal{X} \subseteq \mathbb{R}^p$. The Mahalanobis distance metric parameterized by a weight matrix $W$ between any two points $x_i$ and $x_j$ is given by:

$$d_W(x_i, x_j) = \sqrt{(x_i - x_j)^\top W (x_i - x_j)}, \qquad (1)$$

where $W$ is symmetric positive semidefinite (PSD), denoted as $W \succeq 0$. The Mahalanobis distance can also be interpreted as the Euclidean distance between the points linearly transformed by $L$:

$$d_W(x_i, x_j) = \| L x_i - L x_j \|_2, \qquad (2)$$

where $W = L^\top L$ and $L$ can be found by the Cholesky decomposition. From a general supervised learning perspective, a “good” Mahalanobis distance metric for an outcome $y_i$ at $x_i$ is supposed to draw samples with similar outcomes closer in distance, referred to as the similarity objective, and to pull dissimilar samples further away, referred to as the dissimilarity objective, in the projected space.
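The equivalence between (1) and (2) can be checked numerically. The sketch below (using NumPy; the matrix `A` and the sample points are arbitrary stand-ins) builds a PSD weight matrix $W = A^\top A$ and verifies that the Cholesky factor turns the Mahalanobis distance into a plain Euclidean distance after a linear map:

```python
import numpy as np

def mahalanobis(x, y, W):
    """Mahalanobis distance d_W(x, y) = sqrt((x - y)^T W (x - y))."""
    d = x - y
    return np.sqrt(d @ W @ d)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
W = A.T @ A                      # symmetric PSD by construction
L = np.linalg.cholesky(W).T      # W = L^T L (cholesky returns the lower factor)
x, y = rng.standard_normal(3), rng.standard_normal(3)

d1 = mahalanobis(x, y, W)        # distance via (1)
d2 = np.linalg.norm(L @ x - L @ y)  # distance via (2)
assert np.isclose(d1, d2)
```

Note that `np.linalg.cholesky` returns a lower-triangular factor $C$ with $W = CC^\top$; transposing it gives the $L$ with $W = L^\top L$ used in (2).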
There has been considerable research on the data-driven learning of a proper weight matrix $W$ for the Mahalanobis distance metric in the field of distance metric learning. Both the accuracy and the efficiency of distance-based learning methods can benefit significantly from using the Mahalanobis distance with a proper $W$ [2]. A detailed comparison with related methods is presented in Section 5. While existing algorithms for metric learning have been shown to perform well across various learning tasks, none is sufficient in dealing with the following basic requirements collectively. First, a desired metric should be flexible in adapting to local variations as well as capturing nonlinearity in the data. Second, in high-dimensional settings, it is preferred to have a sparse and low-rank weight matrix for better generalization with noisy inputs and for increasing the interpretability of the fitted model. Finally, the algorithm should be efficient in preserving all properties of a distance metric and be scalable in both the sample size and the number of input variables.
In this paper, we propose a novel method for learning a local sparse metric in a nonlinear feature subspace for binary classification, which is referred to as sDist. Our approach constructs the weight matrix through a gradient boosting algorithm that produces a sparse and low-rank weight matrix in a stagewise manner. Nonlinear features are adaptively constructed within the boosting algorithm using a hierarchical expansion of interactions. The main and novel contribution of our approach is that we mathematically convert a global optimization problem into a sequence of simple local optimizations via boosting, while efficiently guaranteeing the symmetry and the positive semidefiniteness of $W$ without resorting to computationally intensive semidefinite programming. Instead of directly penalizing the sparsity of $W$, sDist imposes a sparsity regularization at each step of the boosting algorithm, which builds a rank-one decomposition of $W$. The rank of the learned weight matrix is further controlled by the sparse boosting method proposed in [3]. Hence, three important attributes of a desirable sparse distance metric are automatically guaranteed in the resulting weight matrix: positive semidefiniteness, low rank, and elementwise sparsity. Moreover, our proposed algorithm is capable of learning a sparse metric on a nonlinear feature space, which leads to a flexible yet highly interpretable solution. Feature selection is carried out as a natural by-product of our algorithm, providing insights into variable importance not only marginally but also jointly in higher orders.
Our paper is organized as follows. In Section 2 we briefly illustrate the motivation for our method using a toy example. Section 3 dissects the global optimization for linear sparse metric learning into stagewise learning via a gradient boosting algorithm. Section 4 extends the framework proposed in Section 3 to nonlinear sparse metric learning via a hierarchical expansion of interactions. We summarize related work in Section 5. Section 6 provides practical remarks on implementing the proposed method. Results from numerical experiments are presented in Section 7. Finally, Section 8 concludes the paper by summarizing our main contributions and sketching several directions of future research.
2 An Illustrative Example
Before introducing the details of the sDist algorithm, we offer here a toy example in Figure 1 to illustrate the problem of interest. The left panel of Figure 1 demonstrates the binary classification problem XOR (Exclusive OR) in a 3-dimensional space, which is commonly used as a classical setting for nonlinear classification in the literature. In this setting, sample points with the same class label are distributed in two clusters positioned diagonally from each other. In the original space, sample points cannot be linearly separated. It is also observed that the vertical dimension is redundant, as it provides no additional information regarding the class membership beyond $x_1$ and $x_2$. Hence, it is expected that there exists a nonlinear subspace on which points on the opposite diagonals of the tilted surface are closer to each other. Moreover, the subspace should be constructed solely based on a minimal set of variables that are informative about the class membership. The right panel of Figure 1 shows the transformed subspace learned by the proposed sDist algorithm, which is based only on the informative variables $x_1$ and $x_2$. In particular, the curved shape of the resulting surface ensures that sample points with the same class label are drawn closer and those with opposite labels are pulled further apart.
3 Boosted Linear Sparse Metric Learning
In this section, we first discuss the case of learning a linear sparse metric. Extension to a nonlinear metric is discussed in Section 4. Assume that we are given a dataset $\{(x_i, y_i)\}_{i=1}^n$, $x_i \in \mathcal{X}$, where $\mathcal{X} \subseteq \mathbb{R}^p$ is the input feature space and $p$ is the number of dimensions of the input vector (for simplicity, we only consider datasets with numerical features in this paper, on which distances are naturally defined). The class label $y_i \in \{-1, +1\}$. Consider an ideal scenario where there exists a metric parametrized by $W$ such that, in the transformed space, classes are separable. Then a point should, on average, be closer to the points from the same class than to the ones from the other class in its local neighborhood. Under this proposition, we propose a simple but intuitive discriminant function at $x_i$ between classes characterized by $W$:

$$f_W(x_i) = \frac{1}{k}\Big[\sum_{x_j \in B_k^-(x_i)} d_W^2(x_i, x_j) - \sum_{x_j \in B_k^+(x_i)} d_W^2(x_i, x_j)\Big], \qquad (3)$$

where $B_k^+(x_i)$ and $B_k^-(x_i)$ are the sets of $k$ nearest neighbors of $x_i$ with the same class labels and with the opposite class labels as $y_i$, respectively. Without any prior information, the local neighborhoods are first identified using the Euclidean distance (in Section 6, we introduce a practical solution that updates the local neighborhoods regularly as the boosting algorithm proceeds). When domain knowledge of local similarity relationships is available, local neighborhoods can be constructed with better precision. The predicted class label is obtained by $\hat{y}_i = y_i$ if $f_W(x_i) > 0$ and $\hat{y}_i = -y_i$ otherwise. For simplicity, we drop $k$ in the notations $B^+(x_i)$ and $B^-(x_i)$, as $k$ is fixed throughout the algorithm.
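The discriminant above can be sketched in a few lines (a hedged illustration: the function name `discriminant` and the toy two-cluster data are ours, and for simplicity the neighborhoods here are computed under the supplied $W$ rather than fixed in advance by the Euclidean distance):

```python
import numpy as np

def discriminant(i, X, y, W, k=3):
    """Sketch of the local discriminant f_W(x_i): mean squared d_W to the
    k nearest opposite-class neighbors minus that to the k nearest
    same-class neighbors; f > 0 means x_i lies closer to its own class."""
    diffs = X - X[i]
    d2 = np.einsum('nj,jk,nk->n', diffs, W, diffs)  # squared d_W to all points
    d2[i] = np.inf                                  # exclude the point itself
    same = np.where(y == y[i])[0]
    opp = np.where(y != y[i])[0]
    B_same = same[np.argsort(d2[same])[:k]]
    B_opp = opp[np.argsort(d2[opp])[:k]]
    return (d2[B_opp].sum() - d2[B_same].sum()) / k

# Two well-separated clusters: every point is closer to its own class.
X = np.vstack([np.zeros((5, 2)), 10 + np.zeros((5, 2))])
X += np.random.default_rng(4).normal(0, 0.1, X.shape)
y = np.array([1] * 5 + [-1] * 5)
assert all(discriminant(i, X, y, np.eye(2)) > 0 for i in range(10))
```

With $W$ the identity, this reduces to the Euclidean case; a learned $W$ reweights and rotates the space before the neighborhood comparison.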
The base classifier $f_W$ in (3) serves as a continuous surrogate of the $k$NN classifier that is differentiable with respect to the weight matrix $W$. Instead of using the counts of the negative and the positive sample points in local neighborhoods, we adopt the continuous value of distances to the two classes to indicate the local affinity to the negative and the positive classes. A detailed comparison of the performance of the proposed classifier (3) with the $k$NN classifier at different values of $k$ can be found in Appendix A. It is shown that $f_W$ in (3) achieves lower test error at the small values of $k$ commonly used in neighborhood-based methods. Furthermore, as we will show in the following, the differentiability of $f_W$ enables smooth optimization over $W$, which facilitates a faster and more stable learning algorithm.

Alternatively, $f_W(x_i)$ can be represented as an inner product between the weight matrix $W$ and the data information matrix $Z_i$, defined below, which contains all the information of training sample point $x_i$ needed for classification:

$$f_W(x_i) = \langle W, Z_i \rangle, \qquad (4)$$

where

$$Z_i = \frac{1}{k}\Big[\sum_{x_j \in B^-(x_i)} (x_i - x_j)(x_i - x_j)^\top - \sum_{x_j \in B^+(x_i)} (x_i - x_j)(x_i - x_j)^\top\Big]$$

and $\langle \cdot, \cdot \rangle$ stands for the inner product of vectorized matrices. Since the matrices $Z_i$ can be precalculated without the intervention of $W$, this alternative formulation of $f_W$ suggests a computationally efficient optimization over $W$ while keeping the $Z_i$'s fixed.
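The identity behind (4) is that $d^\top W d = \langle W, dd^\top \rangle$ for any difference vector $d$, so the neighborhood sums collapse into a single inner product with a precomputable matrix. A minimal check (the random difference vectors are stand-ins for the neighborhood terms $x_i - x_j$):

```python
import numpy as np

rng = np.random.default_rng(5)
p, k = 4, 3
W = rng.standard_normal((p, p))
W = W.T @ W                              # an arbitrary PSD weight matrix
d_opp = rng.standard_normal((k, p))      # stand-ins for x_i - x_j, opposite class
d_same = rng.standard_normal((k, p))     # stand-ins for x_i - x_j, same class

# Z_i assembled once, independently of W.
Z = (sum(np.outer(d, d) for d in d_opp)
     - sum(np.outer(d, d) for d in d_same)) / k

# Direct evaluation of the neighborhood sums for comparison.
f_direct = (sum(d @ W @ d for d in d_opp)
            - sum(d @ W @ d for d in d_same)) / k
assert np.isclose(f_direct, np.sum(W * Z))   # f_W(x_i) = <W, Z_i>
```

Because `Z` does not involve `W`, it can be cached once and reused across all candidate weight matrices during optimization.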
For learning $W$, we evaluate the performance of the classifier using the exponential loss, which is commonly used as a smooth objective function in binary classification:

$$L(W) = \sum_{i=1}^n \exp\big(-f_W(x_i)\big). \qquad (5)$$

Our learning task is then translated into deriving a weight matrix $W$ on the original feature space that minimizes the loss function in (5). The optimization of this objective function, however, is generally intractable for high-dimensional data. Our proposed method, sDist, seeks a solution by minimizing the objective function through a sequence of adaptable subproblems such that a feasible solution can be achieved. In short, the building blocks of sDist are: a gradient boosting algorithm that learns a rank-one update of the weight matrix at each step; a sparsity regularization on each rank-one update that enforces elementwise sparsity while preserving positive semidefiniteness; and a sparse boosting criterion that controls the total number of boosting steps to achieve overall sparsity and low rank of the resulting weight matrix.

3.1 Metric Learning via Boosting
In the distance metric learning literature, much effort has been put into learning the weight matrix $W$ by solving a single optimization problem globally, as in [4] and [5]. However, the optimization turns out to be either computationally intractable or susceptible to local optima with noisy high-dimensional inputs.
Boosting [6] offers a stagewise alternative to a single complex optimization problem. The motivation for boosting is that one can use a sequence of small improvements to derive a better global solution. Under the classification setting, boosting combines the outputs of many weak learners trained sequentially to produce a final aggregated classifier. Here, a weak learner is a classifier that is constructed to be only modestly better than a random guess. Subsequent weak learners are trained with more weight on previously misclassified cases, which reduces the dependence among the trained learners and produces a final learner that is both stable and accurate. Such an ensemble of weak learners has been proven to be more powerful than a single complex classifier and to have better generalization performance [7]. In [8] and [9], a boosting algorithm was implemented for learning a full distance metric, which has motivated the proposed algorithm in this paper. Their important theorem on trace-one semidefinite matrices is central to the theoretical basis of our approach.
Adopting a boosting scheme, sDist learns the weight matrix in a stepwise fashion to avoid overfitting the training data in one optimization process. To construct the gradient boosting algorithm, we first decompose the learning problem into a sequence of weak learners. It is shown in [8] that any symmetric positive semidefinite matrix with trace one can be decomposed into a linear convex span of symmetric positive semidefinite rank-one matrices:

$$W = \sum_{m=1}^{M} w_m P_m, \qquad (6)$$

where $w_m \ge 0$, $\sum_{m=1}^M w_m = 1$, and each $P_m$ is symmetric PSD with $\mathrm{rank}(P_m) = 1$ and $\mathrm{tr}(P_m) = 1$. We define the vector of weights $w = (w_1, \dots, w_M)^\top$. The parameter $M$ is the number of boosting iterations. Since any symmetric rank-one matrix can be written as the outer product of a vector with itself, we further decompose $W$ as

$$W = \sum_{m=1}^{M} w_m \xi_m \xi_m^\top, \qquad \xi_m \in \mathbb{R}^p. \qquad (7)$$
Based on the decomposition in (7), we propose a gradient boosting algorithm that, within each step $m$, learns a rank-one matrix $\xi_m \xi_m^\top$ and its nonnegative weight $w_m$. Each learned $\xi_m \xi_m^\top$ can be considered a small transformation of the feature space in terms of scaling and rotation. We use the following base learner in the gradient boosting algorithm:

$$g_\xi(x_i) = \langle \xi \xi^\top, Z_i \rangle = \xi^\top Z_i \xi. \qquad (8)$$
In consecutive boosting steps, the target discriminant function is constructed as a stagewise additive expansion. At the $m$th step, the aggregated discriminant function is updated by adding the base learner $g_{\xi_m}$ with weight $w_m$ to the existing classifier with weight matrix $W_{m-1}$ learned from the previous steps:

$$f_{W_m}(x_i) = f_{W_{m-1}}(x_i) + w_m g_{\xi_m}(x_i),$$

where the resulting composite $f_{W_m}$ is a weighted sum of the base learners $g_{\xi_l}$, $l = 1, \dots, m$, learned from all previous steps. Therefore, the rank-one matrices obtained at each boosting step are assembled to construct the desired weight matrix, reversing the decomposition in (7). In this process, the required symmetry and positive semidefiniteness of the weight matrix are automatically preserved without imposing any constraint. Moreover, the number of total boosting steps $M$ caps the rank of the final weight matrix. Thus, we can achieve an optimal reduced-rank distance metric by using an appropriate $M$, which is discussed in Section 3.3.
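The structural guarantees of this aggregation can be mimicked in a few lines (a sketch with random unit vectors standing in for learned $\xi_m$'s): symmetry, positive semidefiniteness, and the rank cap $M$ all hold by construction, with no explicit constraint needed.

```python
import numpy as np

rng = np.random.default_rng(2)
p, M = 6, 3
W = np.zeros((p, p))
for m in range(M):
    xi = rng.standard_normal(p)
    xi /= np.linalg.norm(xi)         # unit-norm direction, as in (7)
    w_m = rng.uniform(0.1, 1.0)      # a nonnegative step weight
    W += w_m * np.outer(xi, xi)      # rank-one PSD update

assert np.allclose(W, W.T)                        # symmetry preserved
assert np.all(np.linalg.eigvalsh(W) >= -1e-10)    # PSD preserved
assert np.linalg.matrix_rank(W) <= M              # rank capped by M
```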
In the gradient boosting algorithm, the learning goal is to attain the minimum of the loss function in (5). This is achieved by adapting a steepest-descent minimization in the functional space of $f_W$ in (3), which is characterized by the weight matrix $W$. The optimization problem in each boosting step is divided into two substeps, for $m = 1, \dots, M$:

1. Finding the rank-one matrix $\xi_m \xi_m^\top$ given the previous aggregation $W_{m-1}$. The residuals from the previous steps are:

$$r_i^{(m)} = \exp\big(-f_{W_{m-1}}(x_i)\big), \qquad (9)$$

for $i = 1, \dots, n$. The subsequent rank-one matrix is obtained by minimizing the loss function for a new weak learner $g_\xi$ in (8), that is,

$$\xi_m = \arg\min_{\xi} \sum_{i=1}^n \exp\big(-f_{W_{m-1}}(x_i) - \xi^\top Z_i \xi\big). \qquad (10)$$

Since

$$\exp\big(-f_{W_{m-1}}(x_i) - \xi^\top Z_i \xi\big) = r_i^{(m)} \exp\big(-\xi^\top Z_i \xi\big),$$

the objective of (10) is equivalent to identifying

$$\xi_m = \arg\min_{\xi} \sum_{i=1}^n r_i^{(m)} \exp\big(-\xi^\top Z_i \xi\big), \qquad (11)$$

and the rank-one update of the weight matrix is calculated as $\xi_m \xi_m^\top$.

However, (11) is nonconvex and suffers from local minima and instability. Instead of pursuing the direct optimization of the objective in (11), we resort to an approximation of it by the first-order Taylor expansion, which is commonly used in optimizing nonconvex exponential objective functions. It allows us to take advantage of the exponential loss in the binary classification task as well as to avoid the expensive computational cost of a higher-order expansion. This approximation results in a simpler convex problem:

$$\xi_m = \arg\max_{\xi: \|\xi\|_2 = 1} \xi^\top G_m \xi, \qquad (12)$$

where $G_m = \sum_{i=1}^n r_i^{(m)} Z_i$. It is worth noting that solving (12) is equivalent to computing the eigenvector associated with the largest eigenvalue of $G_m$ via eigendecomposition.

2. Finding the weight $w_m$ given $\xi_m$, by a one-dimensional line search on the loss along the new direction:

$$w_m = \arg\min_{w \ge 0} \sum_{i=1}^n r_i^{(m)} \exp\big(-w\, \xi_m^\top Z_i \xi_m\big). \qquad (13)$$

At last, the weight matrix is updated by

$$W_m = W_{m-1} + w_m \xi_m \xi_m^\top. \qquad (14)$$
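One boosting step can be sketched as follows (a simplified illustration: `boosting_step` is our name, the information matrices are random symmetric stand-ins, and a fixed step size replaces the line-searched weight $w_m$ of (13)):

```python
import numpy as np

def boosting_step(W_prev, Z, step=0.1):
    """One sDist-style step (a sketch): weight each sample by its
    exponential residual (9), form the gradient matrix G_m = sum_i r_i Z_i,
    and take its leading eigenvector as the new rank-one direction (12)."""
    f = np.einsum('ijk,jk->i', Z, W_prev)   # f_W(x_i) = <W, Z_i> for all i
    r = np.exp(-f)                          # residuals (9)
    G = np.einsum('i,ijk->jk', r, Z)        # first-order surrogate matrix
    vals, vecs = np.linalg.eigh(G)
    xi = vecs[:, -1]                        # unit eigenvector, top eigenvalue
    return W_prev + step * np.outer(xi, xi)

# Toy run on random symmetric information matrices.
rng = np.random.default_rng(3)
Z = rng.standard_normal((10, 4, 4))
Z = (Z + Z.transpose(0, 2, 1)) / 2
W = np.zeros((4, 4))
for _ in range(5):
    W = boosting_step(W, Z)
assert np.all(np.linalg.eigvalsh(W) >= -1e-10)   # PSD throughout
```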
The full algorithm is summarized in Algorithm 1 in Section 4.
3.2 Sparse Learning and Feature Selection
In the current literature on sparse distance metric learning, a penalty for sparsity is usually imposed on the columns of the weight matrix $W$ or its transformation $L$, which is inefficient in achieving both elementwise sparsity and low rank in the resulting $W$. For instance, Sparse Metric Learning via Linear Programming (SMLlp) [11] is able to obtain a low-rank $W$, but the resulting transformation is dense, rendering it inapplicable to high-dimensional datasets and lacking feature interpretability. Other methods, such as Sparse Metric Learning via Smooth Optimization (SMLsm) [12], cannot preserve the positive semidefiniteness of $W$ while imposing constraints for elementwise sparsity and reduced rank. These methods often rely on the computationally intensive projection onto the positive semidefinite cone to preserve the positive semidefiniteness of $W$ in their optimization steps. With the rank-one decomposition of $W$, we achieve elementwise sparsity and low rank of the resulting weight matrix simultaneously by regularizing both $\xi_m$ at each boosting step and the total number of boosting steps $M$.

First, we enforce elementwise sparsity by penalizing the $\ell_1$ norm of $\xi_m$. This measure not only renders a sparse linear transformation of the input space but also selects, at each step, a small subset of features relevant to the class difference. The optimization in (12) is replaced by a penalized problem:

$$\xi_m = \arg\max_{\xi: \|\xi\|_2 = 1} \; \xi^\top G_m \xi - \lambda \|\xi\|_1, \qquad (15)$$

where $\lambda$ is the regularizing parameter on $\|\xi\|_1$.
As pointed out in Section 3.1, (12) can be solved as an eigendecomposition problem. The optimization problem in (15), appended with a single sparsity constraint on the eigenvector associated with the largest eigenvalue, is shown in [13] to be a sparse eigenvalue problem. We adopt a simple yet effective solution, the truncated iterative power method introduced in [13], for obtaining the largest sparse eigenvector with at most $s$ nonzero entries. Power methods provide a scalable way to obtain the largest eigenvalue and the corresponding eigenvector of high-dimensional matrices without computationally intensive matrix decompositions. The truncated power iteration applies hard-thresholding shrinkage to the leading eigenvector of $G_m$, as summarized in Algorithm 2 in Appendix B.
Using the cardinality parameter $s$ in the sparse eigenvalue problem spares the effort of tuning the regularizing parameter $\lambda$ indefinitely to achieve the desired level of sparsity. In the context of sDist, $s$ in fact controls the level of the module effect among input variables, namely, the joint effect of selected variables on the class membership. Inputs that are marginally insignificant can have substantial influence when joined with others. The very nature of the truncated iterative power method enables us to identify informative variables in groups within each step. These variables are likely to constitute influential interaction terms that explain the underlying structure of the decision boundary yet are hard to discern marginally. This characteristic is deliberately utilized in the adaptive construction of the nonlinear feature mapping, discussed in detail in Section 4. In practice, the value of $s$ can be chosen based on domain knowledge, depending on the order of potential interactions among variables in the application. Otherwise, we use cross-validation to select the ratio between $s$ and the number of features at each boosting step, as it is often assumed that the number of significant features is roughly proportional to the total number of features in real applications.
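A minimal version of the truncated power iteration might look like this (a sketch after the idea in [13]; the function name and parameters are ours): at each iteration, multiply by the matrix, keep only the $s$ largest-magnitude entries, and renormalize.

```python
import numpy as np

def truncated_power(G, s, iters=100):
    """Sketch of the truncated power method: returns an s-sparse unit
    vector approximating the leading sparse eigenvector of G."""
    p = G.shape[0]
    v = np.ones(p) / np.sqrt(p)
    for _ in range(iters):
        v = G @ v
        keep = np.argsort(np.abs(v))[-s:]    # indices of s largest entries
        mask = np.zeros(p, dtype=bool)
        mask[keep] = True
        v[~mask] = 0.0                       # hard-thresholding step
        v /= np.linalg.norm(v)
    return v

# Toy check: a matrix whose leading eigenvector is supported on 2 entries.
G = np.diag([5.0, 4.9, 0.1, 0.1])
v = truncated_power(G, s=2)
assert np.count_nonzero(v) <= 2
assert np.argmax(np.abs(v)) == 0
```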
3.3 Sparse Boosting
The number of boosting steps $M$, or equivalently the number of rank-one matrices, bounds the overall sparsity and the rank of the resulting weight matrix. Without preventing $M$ from growing infinitely large, the resulting metric may fail to capture the low-dimensional informative representation of the input variable space. Fitting infinitely many weak learners without regularization produces an overcomplicated model that causes overfitting and poor generalization. Hence, in addition to the sparsity control over $\xi_m$, we incorporate an automatic selection of the number of weak learners into the boosting algorithm by formulating it as an optimization problem. This optimization imposes a further regularization on the weight matrix to enforce a low-rank structure. Therefore, the resulting $W$ is ensured to have reduced rank if the true signal lies in a low-dimensional subspace, while the overall elementwise sparsity is also guaranteed.
To introduce the sparse boosting for choosing $M$, we first rewrite the aggregated discriminant function at the $m$th step through a hat operator $\mathcal{H}_m$, mapping the original feature space to the reduced and transformed space, i.e., $\mathcal{H}_m: \mathcal{X} \to L_m \mathcal{X}$, in which $L_m \mathcal{X}$ is the space transformed by $L_m$, with $W_m = L_m^\top L_m$. Here $L_m$ is uniquely defined by the positive semidefiniteness of $W_m$. Hence, we define the complexity measure of the boosting process at the $m$th step by the generalized definition of degrees of freedom in [14]:

$$\mathrm{df}(m) = \mathrm{tr}(\mathcal{H}_m). \qquad (16)$$

With the complexity measure in (16), we adopt the sparse boosting strategy introduced in [3]. First, let the process carry on for a large number, $M^*$, of iterations; then the optimal stopping time $\hat{M}$ is the minimizer of the stopping criterion

$$\hat{M} = \arg\min_{1 \le m \le M^*} \; \sum_{i=1}^n \exp\big(-f_{W_m}(x_i)\big) + \lambda_M \,\mathrm{df}(m), \qquad (17)$$

where $\lambda_M$ is the regularizing parameter for the overall complexity of $W_m$.

This objective is rather intuitive: the $\xi_m$'s are learned as sparse vectors, and thus $W_m$ has nonzero entries mostly on the diagonal, at the variables selected so far. Therefore, $\mathrm{df}(m)$ is a good approximation of the number of selected variables, which explicitly indicates the level of complexity of the transformed space at step $m$.
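The stopping rule amounts to a penalized model-selection sweep over the recorded boosting path. A toy illustration (the loss and complexity curves are made up, and `select_stopping_step` is our name):

```python
import numpy as np

def select_stopping_step(losses, dfs, lam=1.0):
    """Sparse-boosting style stopping rule, a sketch of (17): run M* steps,
    record the training loss and the complexity df(m) at each step, then
    pick the step minimizing loss(m) + lam * df(m)."""
    crit = np.asarray(losses) + lam * np.asarray(dfs)
    return int(np.argmin(crit))

# Toy curves: the loss keeps shrinking, but complexity grows each step.
losses = [10.0, 6.0, 4.5, 4.2, 4.1, 4.05]
dfs = [1, 2, 3, 4, 5, 6]
m_hat = select_stopping_step(losses, dfs, lam=1.0)
assert m_hat == 2   # later gains no longer justify the added complexity
```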
4 Boosted Nonlinear Sparse Metric Learning
The classifier defined in (3) works well only when the signal of class membership is carried by a linear transformation of the original feature space, which is rarely the case in practice. In this section, we introduce nonlinearity into metric learning by learning a weight matrix on a nonlinear feature mapping of the input variable space, $\phi: \mathcal{X} \to \mathcal{F}$. The nonlinear discriminant function is defined as

$$f_{W,\phi}(x_i) = \frac{1}{k}\Big[\sum_{x_j \in B^-(x_i)} d_W^2\big(\phi(x_i), \phi(x_j)\big) - \sum_{x_j \in B^+(x_i)} d_W^2\big(\phi(x_i), \phi(x_j)\big)\Big], \qquad (18)$$

where $d_W$ is now computed between the mapped points in $\mathcal{F}$.

Learning a “good” feature mapping in an infinite-dimensional nonlinear feature space is infeasible. In [15], Torresani and Lee resort to the “kernel” trick and construct the Mahalanobis distance metric on the basis expansion of kernel functions in a Reproducing Kernel Hilbert Space. Taking a different route, Kedem et al. [16] abandon the reliance on the Mahalanobis distance metric and learn a distance metric on the nonlinear basis functions constructed by regression trees. Although these methods provide easy-to-use “black box” algorithms that offer extensive flexibility in modeling a nonlinear manifold, they are sensitive to the choice of model parameters and are subject to the risk of overfitting. The superfluous set of basis functions also hinders the interpretability of the resulting metric model with respect to the relevant factors of class separation.
In this paper, we restrict the feature mapping to the space of polynomial functions of the original input variables. The construction of nonlinear features is tightly incorporated within the boosted metric learning algorithm introduced in Section 3. Accordingly, a proper metric is learned in concert with the building of the essential nonlinear mappings suggested by the data.
We initialize $\phi_0$ as the identity mapping at step $m = 0$. In the following steps, based on the optimal sparse vector $\xi_m$ learned from the regularized optimization problem (15), we expand the feature space by only including interaction terms and polynomial terms among the nonzero entries of $\xi_m$, that is, the selected features. This strategy allows the boosting algorithm to benefit from the flexibility introduced by the polynomials without running into an overwhelming computational burden and storage need. In comparison, a full polynomial expansion would result in a formidable increase in the dimensionality of the information matrices $Z_i$.
The polynomial feature mapping also permits the selection of significant nonlinear features. Kernel methods are often preferred in nonlinear classification problems due to their flexible infinite-dimensional basis functions. However, for the purpose of achieving a sparse weight matrix, each basis function needs to be evaluated when making the selection toward a sparse solution; using kernel methods in such a case is computationally infeasible because of the infinite dimensionality of the basis functions. By adaptively expanding polynomial features, optimizing (15) on the expanded feature space is able to identify not only significant input variables but also informative interaction terms and polynomial terms.
Before we lay out the details of the adaptive feature expansion algorithm, we define the following notions. Let $\mathcal{C}_m$ be the set of candidate variables at step $m$, where $c^{(m)}_j$ represents the $j$th candidate feature and $p_m$ is the cardinality of the set $\mathcal{C}_m$, that is, the number of features at step $m$. The set $\mathcal{C}_m$ includes the entire set of original variables as well as the appended interaction terms. Denote by $\mathcal{S}_m$ the cumulative set of the unique variables selected up to step $m$, and by $\mathcal{A}_m$ the set of variables newly selected in step $m$. Then,

- $m = 0$: Set $\mathcal{C}_0 = \{x_1, \dots, x_p\}$, the set of the original variables.

- $m = 1$: Select $\mathcal{A}_1$ by the regularized optimization in (15) with prespecified $s$, and expand the candidate set with the interactions among the selected variables:

$$\mathcal{C}_1 = \mathcal{C}_0 \cup (\mathcal{A}_1 \otimes \mathcal{A}_1),$$

where the operator “$\otimes$” is defined as

$$\mathcal{A} \otimes \mathcal{B} = \{ab : a \in \mathcal{A},\, b \in \mathcal{B}\}.$$

- $m \ge 2$: Select $\mathcal{A}_m$. Then

$$\mathcal{C}_m = \mathcal{C}_{m-1} \cup (\mathcal{A}_m \otimes \mathcal{S}_m). \qquad (20)$$

The feature mapping $\phi_m$ at the $m$th step of the algorithm is then defined as $\phi_m(x) = x_{\mathcal{C}_m}$, the vector whose components are the elements in $\mathcal{C}_m$ (here, when $\mathcal{C}$ is a set of variables or interactions of variables, $x_{\mathcal{C}}$ denotes the columns of $x$, or products of columns of $x$, listed in $\mathcal{C}$).
It is worth noting that, in updating the $Z_i$'s, there is no need to recompute the entire matrix, the cost of which is on the order of $p_m^2$. Instead, taking advantage of the existing $Z_i$, it is only required to append the rows of pairwise products between the newly added terms and the currently selected ones and to make the resulting matrix symmetric. The extra computational cost is thus substantially reduced when $p_m$ is large, so expanding the feature space in this stepwise manner remains tractable even with large $p$. Since we only increase the dimension of the feature space by a controlled number of terms at each step, with $M$ itself controlled by the sparse boosting, the proposed hierarchical expansion is computationally feasible even with high-dimensional input data.
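The hierarchical expansion can be sketched as follows (a simplified illustration: the helper `expand_features` and its naming scheme are ours; duplicate products are skipped by name, so the candidate set only grows by genuinely new terms):

```python
import numpy as np

def expand_features(Phi, names, selected_idx, cumulative_idx):
    """Append pairwise products between newly selected features and all
    features selected so far, instead of a full polynomial expansion."""
    new_cols, new_names = [], []
    existing = set(names)
    for a in selected_idx:
        for b in cumulative_idx:
            name = '*'.join(sorted([names[a], names[b]]))
            if name not in existing:          # skip duplicates like x1*x3
                existing.add(name)
                new_cols.append(Phi[:, a] * Phi[:, b])
                new_names.append(name)
    if new_cols:
        Phi = np.column_stack([Phi] + new_cols)
    return Phi, names + new_names

X = np.arange(12.0).reshape(4, 3)
Phi, names = expand_features(X, ['x1', 'x2', 'x3'],
                             selected_idx=[0, 2], cumulative_idx=[0, 2])
assert names == ['x1', 'x2', 'x3', 'x1*x1', 'x1*x3', 'x3*x3']
```

Only the products involving the $s$-sparse selection are materialized, which is what keeps the per-step growth of the candidate set bounded.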
We integrate the adaptive feature expansion for nonlinear metric learning into the boosted sparse metric learning algorithm in Section 3. The final algorithm is summarized in Algorithm 1. The details of how to choose the values of the tuning parameters are elaborated in Section 6.

5 Related Work
There is an extensive literature devoted on the problem of learning a proper for the Mahalanobis distance. In this paper, we focus on the problem of supervised metric learning for classification in which class labels are given in the training sample. In the following, we categorize related methods in the literature into four groups: 1) global metric learning, 2) local metric learning, 3) sparse metric learning, and 4) nonlinear metric learning.
Global metric learning aims to learn a $W$ that addresses the similarity and dissimilarity objectives at all sample points. Probability Global Distance Metric (PGDM) learning [4] is an early representative method of this group. In PGDM, the class labels are converted into pairwise constraints on the metric values between pairs of data points in the feature space: equivalence (similarity) constraints that similar pairs should be close under the learned metric, and inequivalence (dissimilarity) constraints that dissimilar ones should be far away. The distance metric is then derived to minimize the sum of squared distances between data points under the equivalence constraints, while maintaining a lower bound for the ones under the inequivalence constraints. The global optimum of this convex optimization problem is derived using Semi-Definite Programming (SDP). However, the standard SDP solution by the interior point method has heavy storage demands and a high worst-case computational complexity, rendering it computationally prohibitive for high-dimensional problems. Flexible Metric Nearest Neighbor (FMNN) [17] is another method of this group, which instead adopts a probabilistic framework for learning a distance metric with global optimality. It assumes a logistic regression model for estimating the probability of pairs of observations being similar or dissimilar based on the learned metric, yet it suffers from poor scalability as well.
The second group, local metric learning methods, learn $W$ by pursuing the similarity objective within the local neighborhoods of observations and a large margin at the boundaries between different classes. For examples, see Neighborhood Component Analysis (NCA) [18] and Large Margin Nearest Neighbor (LMNN) [5]. NCA learns a distance metric by stochastically maximizing the probability of correct class assignment in the space transformed by $L$. The probability is estimated locally by Leave-One-Out (LOO) kernel density estimation with a distance-based kernel. LMNN, on the other hand, learns $W$ deterministically by maximizing the margin at the class boundary in local neighborhoods. Adapting the idea of PGDM while focusing on local structures, it penalizes small margins in distance from the query point to its similar neighbors using a hinge loss. It has been shown in [5] that LMNN delivers state-of-the-art performance among most distance metric learning algorithms. Despite the good performance, LMNN and its extensions suffer from high computational cost due to their reliance on SDP, similar to PGDM. Therefore, they usually require data preprocessing for dimension reduction, using ad hoc tools such as Principal Component Analysis (PCA), when applied to high-dimensional data. A survey paper [2] provides a more thorough treatment of learning a linear and dense distance metric, especially from the aspect of optimization.

When the dimension of the data increases, learning a full distance metric becomes extremely computationally expensive and may easily run into overfitting with noisy inputs. It is expected that a sparse distance matrix would produce better generalization performance than its dense counterparts and afford a much faster and more efficient distance calculation. Sparse metric learning is motivated by the demand for appropriate distance measures in high-dimensional spaces and can also lead to supervised dimension reduction. In the sparse metric learning literature, sparsity regularization can be introduced in three different ways: on the rank of $W$ for learning a low-rank $W$ (e.g., [15], [19], [11], [20]); on the elements of $W$ for learning an elementwise sparse $W$ [21]; and the combination of the two [12]. All these strategies suffer from various limitations and computational challenges. First, a low-rank $W$ is not necessarily sparse. Methods such as [11] impose a penalty on the trace norm of $W$ as a convex proxy of the nonconvex, nondifferentiable rank function, which usually involves heavy computation and approximation to maintain both the low-rank status and the positive semidefiniteness of $W$. Searching for an elementwise sparse solution as in [21] places the penalty on the off-diagonal elements of $W$; again, the PSD property of the resulting sparse $W$ is hard to maintain in a computationally efficient way. Based on the framework of LMNN, Ying et al. [12] combine the first two strategies and penalize the $\ell_{2,1}$ norm of $W$ (given by $\|W\|_{2,1} = \sum_j \|W_{\cdot j}\|_2$ [12]) to regularize the number of nonzero columns in $W$. Huang et al. [22] propose a general framework of sparse metric learning that adapts several well-recognized sparse metric learning methods with a common form of sparsity regularization. As a limitation of such norm-based regularization, it is hard to impose a further constraint on $W$ to guarantee PSD in the learned metric.
As suggested in (2), the Mahalanobis distance metric implies a linear transformation of the original feature space. This linearity inherently limits the applicability of distance metric learning in discovering potentially nonlinear decision boundaries. It is also common that some variables are relevant to the learning task only through interactions with others. As a result, linear metric learning is at risk of ignoring useful information carried by the features beyond the marginal distributional differences between classes. Nonlinear metric learning identifies a Mahalanobis distance metric on nonlinear mappings of the input variables, introducing nonlinearity via well-designed basis functions on which the distances are computed. Large Margin Component Analysis (LMCA) [15] maps the input variables onto a high-dimensional feature space by a nonlinear map restricted to the eigenfunctions of a Reproducing Kernel Hilbert Space (RKHS) [23]; the learning objective is then carried out using the “kernel trick” without explicitly computing the inner products. LMCA involves optimizing a nonconvex objective function and is slow in convergence; such heavy computation limits its scalability to relatively large datasets. Kedem et al. [16] introduce two methods for nonlinear metric learning, both derived from extending LMNN. The first uses a nonlinear distance for learning a metric on histogram data. The other method, GB-LMNN, exploits a gradient boosting algorithm that learns regression trees as the nonlinear basis functions. GB-LMNN relies on the Euclidean distance in the nonlinearly expanded feature space without an explicit weight matrix, which limits the interpretability of its results. Current methods in nonlinear metric learning are mostly based on black-box algorithms, which are prone to overfitting and have limited interpretability of variables.
6 Practical Remarks
When implementing Algorithm 1 in practice, the performance of the sDist algorithm can be further improved, in terms of both accuracy and computational efficiency, by a few practical techniques: local neighborhood updates, shrinkage, bagging, and feature subsampling. We numerically evaluate the effect of the associated parameters on a synthetic dataset in Section 7.
As stated in Section 3, the base classifier in (3) is constructed from local neighborhoods. Without additional domain knowledge about the local similarity structure, we search for the local neighbors of each sample point using the Euclidean distance. Although the true neighbors in the informative feature subspace may be poorly approximated by neighbors found in the Euclidean space of all features, the distance metrics learned during the boosting algorithm can be used to construct progressively better approximations of the true local neighborhoods. These revised local neighborhoods help prevent the learned metric from anchoring to the neighborhoods found under the Euclidean distance and thus reduce overfitting to the training samples. In practice, we update the local neighborhoods using the currently learned metric at a number of steps in the boosting algorithm. The frequency of these updates is determined by the trade-off between predictive accuracy and the computational cost of recomputing pairwise distances between sample points. The appropriate update frequency varies across real-data applications and can be tuned by cross-validation.
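As an illustration of such an update, the sketch below recomputes each sample's neighbors under a learned metric by factoring the weight matrix and reverting to Euclidean search in the transformed space; the helper name, the Cholesky factorization, and the small diagonal jitter are our choices rather than the paper's implementation.

```python
import numpy as np

def mahalanobis_neighbors(X, W, k):
    """Indices of each sample's k nearest neighbors under the metric
    d(x, y)^2 = (x - y)^T W (x - y), with W symmetric PSD.

    Factor W = L L^T and map the points by L^T, so that the metric
    becomes Euclidean in the transformed space, mirroring the
    linear-transformation view of the Mahalanobis distance in (2)."""
    L = np.linalg.cholesky(W + 1e-10 * np.eye(W.shape[0]))  # jitter keeps Cholesky valid
    Z = X @ L                                               # rows are (L^T x_i)^T
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                            # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]
```

With `W` equal to the identity this reduces to the ordinary Euclidean neighbor search, so the same helper serves both the initial neighborhoods and the later updates under the learned metric.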
In addition to sparse boosting, in which the number of boosting steps is controlled, we can further regularize the learning process by imposing shrinkage on the rank-one update at each boosting step. The contribution of each rank-one update is scaled by a shrinkage factor $\nu \in (0, 1]$ when it is added to the current weight matrix. That is, step 2g in Algorithm 1 is replaced by
$$W_t = W_{t-1} + \nu\, \xi_t \xi_t^\top, \qquad (22)$$
where $\xi_t$ denotes the sparse direction learned at step $t$.
This shrinkage factor can be regarded as controlling the learning rate of the boosting procedure. Such shrinkage helps prevent the individual rank-one updates of the weight matrix from fitting too closely to the training samples. It has been empirically shown that smaller shrinkage values favor better generalization performance and require a correspondingly larger number of boosting steps [24]. In practice, we use cross-validation to determine the shrinkage value.
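In code, the shrunken accumulation is a one-line change to the update step; the symbol `nu` for the shrinkage factor and the name of the rank-one direction are our notation for this sketch, not the paper's.

```python
import numpy as np

def shrunken_update(W, xi, nu=0.1):
    """One boosting step with shrinkage: add the rank-one matrix
    xi xi^T to the current weight matrix W, scaled by the learning
    rate nu in (0, 1]. Each update keeps W symmetric PSD, since
    xi xi^T is PSD and nu > 0."""
    return W + nu * np.outer(xi, xi)
```

Smaller values of `nu` make each step more conservative, at the cost of requiring more boosting steps to reach the same fit.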
Bootstrap aggregating (bagging) has been demonstrated to improve the performance of a noisy classifier by averaging over weakly correlated classifiers [7], with the correlations between classifiers diminished by random subsampling. In the gradient boosting algorithm of sDist, we use the same technique of randomly sampling a fraction of the training observations, referred to as the “bagging fraction” in the following, to build the weak learner for each rank-one update. This idea has been well exploited in [25] with tree classifiers, where it is shown that both the accuracy and the execution speed of gradient boosting can be substantially improved by incorporating randomization into the procedure. The bagging fraction is usually taken to be 0.5, or smaller if the sample size is large, and is tuned by cross-validation in our numerical experiments. In our algorithm in particular, bagging substantially reduces the training set size for the individual rank-one updates, so that the required quantities can be computed on the fly without being precalculated, reducing the demand on computational memory. As a result, in applications with large sample sizes, bagging improves not only the test error but also computational efficiency.
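A minimal sketch of the per-step subsampling, with the bagging fraction as an explicit argument (the helper name is ours):

```python
import numpy as np

def bagged_indices(n, frac, rng):
    """Indices of a random fraction `frac` of the n training points,
    drawn without replacement, as in stochastic gradient boosting [25].
    Each boosting step fits its rank-one update on this subset only."""
    m = max(1, int(frac * n))
    return rng.choice(n, size=m, replace=False)
```

With `frac = 0.5` each weak learner sees half of the training set, which both decorrelates the updates and roughly quarters the pairwise-distance work per step.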
In high-dimensional applications, the input variables are likely to be correlated, which translates into high variance in the estimation. As sDist can be viewed as learning an ensemble of nonlinear classifiers, high correlation among features can deteriorate the performance of the aggregated classifier. To address this, we employ the same strategy as in random forests [26]: random subsampling of the features to reduce the correlation among weak learners without greatly increasing the variance. At each boosting step, we randomly select a subset of features from the candidate set, and the rank-one update is constructed on this reduced dimension. The optimization in (15) is then executed on a much smaller scale and selects significant features from within the random subset. As with bagging, feature subsampling enables fast computation of the required quantities without precalculation. The subset size at each boosting step is set as suggested in [26]. Although feature subsampling reduces the chance of selecting the significant features at any single boosting step, it should be emphasized that bagging on the training samples and feature subsampling should be accompanied by shrinkage, and thus by correspondingly more boosting steps; it is shown in [7] that subsampling without shrinkage leads to poor performance on test samples. With a sufficient number of boosting steps, the algorithm manages to identify many informative features without including a dominant number of irrelevant ones. While the appropriate number of boosting steps depends on the application, we generally suggest a large value in order to cover most of the informative features in the random subsampling. Since the computational complexity of the proposed algorithm scales linearly in the number of boosting steps but quadratically in the feature dimension, feature subsampling is more computationally efficient even with a large number of boosting steps; hence, in high-dimensional settings, reducing the dimension of the feature set makes the algorithm substantially faster. Moreover, with feature subsampling the resulting weight matrix has a much smaller complexity measure, as defined in (16), than one learned without feature subsampling at each boosting step. As the sparse boosting approach optimizes a trade-off between prediction accuracy and the complexity of the weight matrix, the resulting weight matrix remains sufficiently sparse.
Therefore, feature subsampling with a large number of boosting steps does not contradict the goal of searching for sparse solutions. However, there is no rule of thumb for choosing the number of boosting steps in advance. Since each application has a different underlying structure in its significant feature subspace and involves a different level of noise, the appropriate number varies case by case. In general, we suggest a large number of boosting steps, from 500 to 2000, proportional to the number of features; when feature subsampling is applied, the number of boosting steps should be increased correspondingly. Since the sparse boosting process is implemented, overfitting is effectively controlled even with many boosting steps, so it is reasonable to start with a considerably large value; otherwise, we use cross-validation to evaluate the different choices.
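The feature subsampling can be sketched as follows; taking the subset size to be about the square root of the number of candidates is the common random-forest default and is our assumption here, as [26] is cited without the exact formula appearing in the text.

```python
import numpy as np

def feature_subsample(candidates, rng):
    """Randomly select about sqrt(p) of the p candidate features for
    the current boosting step; the rank-one update is then optimized
    over this subset only, cutting the quadratic-in-dimension cost."""
    p = len(candidates)
    m = max(1, int(np.sqrt(p)))
    return rng.choice(candidates, size=m, replace=False)
```

Because each step only sees a random sliver of the feature set, shrinkage and a larger number of boosting steps are needed so that informative features still get repeated chances to enter the weight matrix.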
7 Numerical Experiments
In this section, we present both simulation studies and real-data applications to evaluate the performance of the proposed algorithm. The algorithm is implemented with the following specifications. We use 5-fold cross-validation to determine the degree of sparsity for each rank-one update, chosen from a grid of candidate values. The same cross-validation is also applied to tune the overall complexity regularization parameter. In order to control the computational cost and to ensure the interpretability of the selected variables and polynomial features, we impose an upper limit on the order of the expanded polynomial features: when a polynomial term has order greater than a cap value, we stop adding it to the candidate feature set. For our experiments, the cap is set to 4; that is, we expect to see at most four-way interactions. The total number of boosting steps is set to 2000 for all simulation experiments, though with sparse boosting the actual number of weak learners used varies from case to case. Throughout the numerical experiments, the reported test errors are estimated using the Nearest Neighbor classifier under the tuned parameter configuration.
The performance of sDist is compared with several other distance metric learning methods: the Nearest Neighbor (NN) classifier, representing the baseline with no metric learning; Probability Global Distance Metric (PGDM) learning [4]; Large Margin Nearest Neighbor (LMNN) [5]; Sparse Metric Learning via Linear Programming (SMLlp) [11]; and Sparse Metric Learning via Smooth Optimization (SMLsm) [12]. PGDM (Matlab code: http://www.cs.cmu.edu/%7Eepxing/papers/Old_papers/code_Metric_online.tar.gz) [4] is a global distance metric learning method that solves the optimization problem:
$$\min_{W \succeq 0}\ \sum_{(x_i, x_j) \in \mathcal{S}} \|x_i - x_j\|_W^2 \quad \text{s.t.}\quad \sum_{(x_i, x_j) \in \mathcal{D}} \|x_i - x_j\|_W \ge 1,$$
where $\mathcal{S}$ and $\mathcal{D}$ denote the sets of similar and dissimilar pairs, respectively.
LMNN (Matlab code: http://www.cse.wustl.edu/~kilian/code/code.html) learns the weight matrix by maximizing the margin between classes in local neighborhoods via semidefinite programming. That is, the weight matrix is obtained by solving:
$$\min_{W \succeq 0,\ \xi \ge 0}\ \sum_{i,\, j \rightsquigarrow i} \|x_i - x_j\|_W^2 + c \sum_{i,\, j \rightsquigarrow i,\, l} (1 - y_{il})\, \xi_{ijl} \quad \text{s.t.}\quad \|x_i - x_l\|_W^2 - \|x_i - x_j\|_W^2 \ge 1 - \xi_{ijl},$$
where $j \rightsquigarrow i$ indicates that $x_j$ is a target neighbor of $x_i$.
where the $\xi_{ijl}$ are slack variables and $y_{il} = 1$ if $x_i$ and $x_l$ share the same class label and $0$ otherwise. In the experiments, we set the trade-off parameter as suggested in [5]. SMLlp aims at learning a low-rank weight matrix by optimizing over the linear projection $L$ with $W = LL^\top$ in (2):
$$\min_{L,\ \xi \ge 0}\ \sum_{(i, j, l) \in \mathcal{T}} \xi_{ijl} + \gamma \|L\|_1 \quad \text{s.t.}\quad \|L(x_i - x_l)\|^2 - \|L(x_i - x_j)\|^2 \ge 1 - \xi_{ijl},$$
where $\mathcal{T}$ is a set of relative-comparison triplets in which $x_j$ is similar and $x_l$ dissimilar to $x_i$. In a similar manner, SMLsm (Matlab code: http://www.albany.edu/~yy298919/software.html) learns a low-rank weight matrix by employing a mixed $(2,1)$-norm on the weight matrix to enforce column-wise sparsity. It is cast as the minimization problem:
$$\min_{U \in \mathcal{O}^d,\ M \succeq 0}\ \sum_{(i, j, l) \in \mathcal{T}} \big[\, 1 + \|x_i - x_j\|_W^2 - \|x_i - x_l\|_W^2 \,\big]_+ + \gamma \|M\|_{(2,1)}, \quad W = U M U^\top,$$
where $\mathcal{O}^d$ is the set of $d \times d$ orthonormal matrices.
The effectiveness of distance metric learning on high-dimensional datasets depends heavily on the computational complexity of the learning method. PGDM deploys a semidefinite program whose projection step requires a full eigendecomposition, cubic in the dimension, at each gradient update. LMNN likewise involves a costly optimization over the full weight matrix, and the convergence of SMLsm is governed by its stopping criterion. In comparison, sDist runs with a computational complexity that is approximately linear in the number of boosting iterations and in the number of nonzero entries of the rank-one updates. In practice, sDist can be significantly accelerated further by applying the modifications in Section 5.
We construct two simulation settings that are commonly used as classical examples of nonlinear classification problems in the literature: the “double ring” case and the “XOR” case. In Figure 2, the left-most column shows contour plots of high class probability for generating sample points on a 3-dimensional surface, while the input variable space is expanded to a much larger space in which the additional, irrelevant input variables represent pure noise. Figure 2 (top row) shows a simulation in which sample points with opposite class labels intertwine in a double ring, and Figure 2 (bottom row) borrows the illustrative “XOR” classification example from Section 2. Columns 2-4 of Figure 2 illustrate the transformed subspaces learned by the algorithm at selected iterations. Since the optimal number of iterations is not static, and due to space limits, we show only the first iteration, the last iteration determined by sparse boosting, and the middle iteration (rounded half of the optimal number of iterations). Figure 2 clearly shows that the surfaces transformed by the learned distance metric correctly capture the structures of the generative distributions. In the “double ring” example (top row), the learned surface sinks in the center of the plane while the rim bends upward, so that sample points in the outer ring are drawn closer on the transformed surface; this particular shape owes to the quadratic polynomial of the two informative variables chosen in the construction, visible as the parabola in the cross-sectional grid lines. In the “XOR” example (bottom row), the diagonal corners are curved toward the same directions; the interaction between the two informative variables is selected in addition to the original input variables, which is essential for describing this crossing nonlinear decision boundary. sDist also proves highly computationally efficient, achieving approximate optimality within a few iterations.
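For reference, a toy generator in the spirit of the bottom-row “XOR” simulation can be written as below; the exact class-probability surface of the paper is not reproduced, only the sign-of-product structure padded with pure-noise dimensions.

```python
import numpy as np

def make_xor(n, p, seed=0):
    """Generate n points in p dimensions: the label is the sign of
    the product of the first two coordinates (the XOR pattern), and
    the remaining p - 2 coordinates are pure noise, mimicking the
    irrelevant input variables of the simulation."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    y = np.sign(X[:, 0] * X[:, 1]).astype(int)  # labels in {-1, +1}
    return X, y
```

Only an interaction of the first two coordinates separates the classes here, which is exactly the structure a marginal (linear) metric cannot capture.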
We also compare the performance of sDist with the other metric learning methods under different dimensions and sample sizes to demonstrate its scalability and its strength in obtaining essentially sparse solutions on high-dimensional datasets. Here, we generate sample points from the “double ring” and “XOR” examples with the number of informative variables set to 10% of the total dimension, and with sample sizes ranging from 100 to 5000. The results for the two cases are shown in Table 1 and Table 2, respectively. We note that sDist achieves relatively low test errors compared to the competing methods, especially in high-dimensional settings, and proves scalable to datasets with large sample sizes and high-dimensional inputs.
“Double Ring” Scenario

                  N=100                    N=500                    N=5000
  p          50    500   1000        50    500   1000        50    500   1000
  NN        0.310  0.400  0.488     0.311  0.426  0.478     0.308  0.475  0.489
  PGDM      0.320  0.355  0.389     0.312  0.356  0.377     0.337  0.340  0.412
  LMNN      0.230  0.280  0.290     0.245  0.291  0.289     0.246  0.303  0.315
  SMLsm     0.222  0.289  0.250     0.169  0.200  0.249     0.199  0.276  0.330
  sDist     0.143  0.189  0.192     0.177  0.183  0.191     0.168  0.179  0.202
  Bayes Rate 0.130 0.150  0.160     0.154  0.156  0.144     0.160  0.154  0.156
“XOR” Scenario

                  N=100                    N=500                    N=5000
  p          50    500   1000        50    500   1000        50    500   1000
  NN        0.355  0.410  0.491     0.420  0.446  0.499     0.397  0.500  0.500
  PGDM      0.221  0.355  0.383     0.289  0.356  0.360     0.354  0.350  0.403
  LMNN      0.145  0.280  0.274     0.188  0.213  0.239     0.198  0.231  0.299
  SMLsm     0.207  0.307  0.333     0.277  0.291  0.337     0.242  0.378  0.420
  sDist     0.157  0.199  0.192     0.169  0.183  0.225     0.193  0.187  0.221
  Bayes Rate 0.130 0.160  0.160     0.133  0.177  0.181     0.155  0.144  0.138
The performance of sDist is also evaluated on three public datasets, presented in Table 3. For each dataset, we randomly split the original data into a 70% training set and a 30% testing set, and repeat this 20 times. Parameter values are tuned by cross-validation, as in the simulation studies. The test errors reported in Table 3 are averages over the 20 random splits, and the reported running times are the average CPU times for one execution (running times of sDist are based on the configurations that achieve the best predictive performance, with 500 boosting steps for SECOM and Madelon; sDist is implemented in R on an x86_64 Red Hat Linux GNU system, and the competing algorithms are implemented in Matlab on the same system). We also report the average percentage of features selected by the various sparse metric learning methods in Figure 3.
  Data Statistics   Ionosphere   SECOM   Madelon
  # Inputs              33        590      500
  # Instances          351       1567     2600

                Ionosphere              SECOM                  Madelon
            Test Error  Time (s)   Test Error  Time (s)   Test Error  Time (s)
  NN          0.13        0.01       0.14        2.07       0.46        9.05
  PGDM        0.07       37.80       0.09      960.47       0.31     2527.82
  LMNN        0.06       20.06       0.08     1960.94       0.39     1323.64
  SMLsm       0.09      173.19       0.09     1293.97       0.41     2993.97
  sDist       0.05       27.49       0.07      473.07       0.09      689.64
We first compare the various distance metric learning methods on the Ionosphere dataset [27] (available at https://archive.ics.uci.edu/ml/datasets/Ionosphere). This radar dataset represents a typical small dataset and contains mixed data types, which poses a challenge to most distance-based classifiers. From Table 3, we see that sDist and the other metric learning methods significantly reduce the test error relative to the ordinary NN classifier by learning a nonlinear transformation of the input space. sDist in particular achieves the best performance by screening out a large proportion of the noise. The marginal features selected by the different methods are compared in Figure 3; the features selected by sDist are mostly interactions within a single group of variables, suggesting an interesting underlying structure of the data for better interpretation.
SECOM [27] (available at https://archive.ics.uci.edu/ml/datasets/SECOM; the original data are trimmed by removing variables with constant values and variables with more than 10% missing values, reducing the dimension from 591 to 414, and observations with missing values after this trimming are discarded, reducing the sample size to 1436) contains measurements from sensors monitoring a modern semiconductor manufacturing process. This dataset represents a real-data application in which not all input variables are equally valuable: the measured signals from the sensors contain irrelevant information and high noise that mask the true information. In such a scenario, accurate feature selection has proven effective in significantly reducing the test error as well as in identifying the most relevant signals [27]. As shown in Table 3, sDist (due to the heterogeneity of the input variables, we standardize the input variable matrix before running the sDist algorithm; in the nonlinear expansions, selected interaction terms are also scaled before being added to the candidate set) demonstrates dominant performance over the other three methods, with an improvement of about 33% over the original NN classifier using the Euclidean distance. Compared to SMLsm, the other sparse metric learning method, sDist shows much better scalability in CPU time as the number of input variables grows.
MADELON is an artificial dataset used in the NIPS 2003 challenge on feature selection (available at https://archive.ics.uci.edu/ml/datasets/Madelon; we use both the training data and the validation data, and the 5-fold cross-validation is performed on the combined dataset) [27][28][29]. It contains sample points with binary labels clustered on the vertices of a five-dimensional hypercube, whose five dimensions constitute 5 informative variables. Fifteen linear combinations of these five variables were added to form a set of 20 (redundant) informative variables, while the other 480 variables have no predictive power for the class label. In Table 3, sDist shows excellent performance compared to the competing methods in terms of both predictive accuracy and computational efficiency. The test error achieved by sDist also outperforms state-of-the-art methods beyond the distance metric learning literature on the Madelon dataset [30][31][32]. In addition, sDist attains the sparsest solution, as shown in Figure 3, with 15.2% of the features selected in the final weight matrix. Its outstanding performance indicates the importance of learning the low-dimensional manifold in high-dimensional data, particularly in cases with a low signal-to-noise ratio.
We also experimented with different configurations of the tuning parameters introduced in the algorithm and in the practical remarks on the Madelon dataset, including the frequency of local neighborhood updates, the bagging fraction, and the degree of sparsity of the rank-one updates. The performance in terms of both training error and validation error is shown in Figure 4 for both the NN classifier and the base classifier defined in (3). It is evident that updating the neighborhoods more frequently tends to reduce the validation error, though the gain diminishes once the frequency exceeds a certain level. In practice, we suggest updating the local neighborhoods every 50 steps as a trade-off between accuracy and computational cost. In this example, the best performance of both classifiers is achieved at a bagging fraction of 0.3 or 0.5 when the degree of sparsity is small, while for larger degrees of sparsity the errors decrease monotonically as the bagging fraction increases. In practice, we suggest a bagging fraction of 0.5 for moderate-size datasets and 0.3 for large datasets. When the truly informative subspace is of relatively low dimension, as for the Madelon dataset, both training and validation errors are reduced with small degrees of sparsity: sparse rank-one updates benefit the most from the boosting algorithm's progressive learning and prevent overfitting at each single step. In other cases, the optimal degree of sparsity depends on the underlying sparsity structure.
8 Conclusions
In this paper, we propose an adaptive learning algorithm for finding a sparse Mahalanobis distance metric on a nonlinear feature space via gradient boosting for binary classification. In particular, we introduce a sparsity control that results in automatic feature selection. The sDist framework can be further extended in several directions. First, our framework can be generalized to multi-class problems. The base discriminant function in (3) can be extended to a multi-class response variable in a similar fashion as in [33] for multi-class AdaBoost: the class label is recoded as a K-dimensional vector, where K is the number of classes, whose entries equal 1 for the observed class and -1/(K-1) otherwise, and a natural generalization of the loss function in (5) follows. Alternatively, the local positive/negative neighborhoods can be redefined as local similar/dissimilar neighborhoods as in [5], where similar points are sample points with the same class label and dissimilar points are those with different class labels. However, a rigorous discussion of the multi-class extension requires comprehensive analysis: it is not entirely straightforward how to address multi-class labels in metric learning, nor whether the learning goal should be a common metric for all classes or separate metrics between pairs of classes. Due to the limited scope of this paper, we leave these questions to future studies.
Furthermore, in the proposed algorithm we approached the fitting of nonlinear decision boundaries through interaction expansion and local neighborhoods. It has been noted that distance measures have close connections with kernel functions, which are commonly used for nonlinear learning methods in the literature. Integrating nonlinear distance metric learning with kernel methods would lead to more flexible and powerful classifiers.
Appendix A Comparison between the Classifier in (3) and the NN Classifier
The classifier in (3) can be considered a continuous surrogate of the Nearest Neighbor classifier that is differentiable with respect to the weight matrix. In Figure 5, we show the performance of the kNN classifier and of the classifier in (3) at different values of k on the real dataset Ionosphere. It suggests that, with the small values of k normally used in neighborhood-based methods, the classifier in (3) consistently outperforms the kNN classifier with an aligned pattern in terms of average test errors, based on 20 randomly partitioned cross-validations.

Appendix B Truncated Power Method
At each boosting step, we solve the constrained optimization problem in (15) using the truncated power method as given in Algorithm 2.
It is worth noting that the matrix involved at each step of the gradient boosting is not guaranteed to be positive semidefinite. Thus, to ensure that the objective function is nondecreasing, we add a positive multiple of the identity, large enough that the shifted matrix is symmetric positive semidefinite. Such a shift only adds a constant to the objective function, but it produces a different sequence of iterates, and there is a clear trade-off: if the diagonal shift dominates the original matrix, the objective function becomes approximately a squared norm and the algorithm tends to terminate within a few iterations; in the limiting case of an arbitrarily large shift, the method will not move away from its initial iterate. To handle this issue, we adopt a scheme that gradually increases the shift during the iterations, and only when monotonicity is violated, as shown in step 1 of Algorithm 2. The truncated power method allows fast computation of the leading sparse eigenvector. For a high-dimensional but sparse matrix, it also supports sparse matrix computation, which decreases the per-iteration cost from quadratic in the dimension to proportional to the number of nonzero entries.
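A compact sketch of the truncated power iteration of [13] is given below, with a deterministic uniform starting vector and a fixed iteration budget (both our simplifications); the diagonal shift discussed above would be applied to the matrix before calling it.

```python
import numpy as np

def truncated_power(A, k, iters=100):
    """Approximate the k-sparse leading eigenvector of a symmetric
    matrix A: multiply by A, keep only the k largest-magnitude
    entries, renormalize, and repeat."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)              # simple deterministic start
    for _ in range(iters):
        y = A @ x
        keep = np.argsort(np.abs(y))[-k:]    # support of the truncation
        z = np.zeros(n)
        z[keep] = y[keep]                    # hard-threshold to k entries
        nrm = np.linalg.norm(z)
        if nrm == 0:
            break
        x = z / nrm
    return x
```

On a diagonal matrix such as diag(5, 1, 0.5) with k = 1, the iteration immediately locks onto the first coordinate axis, the 1-sparse leading eigenvector.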
References
 [1] W. B. Johnson and J. Lindenstrauss, “Extensions of Lipschitz mappings into a Hilbert space,” Contemporary Mathematics, vol. 26, pp. 189–206, 1984.
 [2] L. Yang and R. Jin, “Distance metric learning: A comprehensive survey,” Michigan State University, pp. 1–51, 2006.

 [3] P. Bühlmann and B. Yu, “Sparse boosting,” The Journal of Machine Learning Research, vol. 7, pp. 1001–1024, 2006.
 [4] E. P. Xing, M. I. Jordan, S. Russell, and A. Ng, “Distance metric learning with application to clustering with side-information,” in Advances in neural information processing systems, pp. 505–512, 2002.
 [5] J. Blitzer, K. Q. Weinberger, and L. K. Saul, “Distance metric learning for large margin nearest neighbor classification,” in Advances in neural information processing systems, pp. 1473–1480, 2005.
 [6] Y. Freund, “Boosting a weak learning algorithm by majority,” Information and computation, vol. 121, no. 2, pp. 256–285, 1995.
 [7] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, vol. 2. Springer, 2009.
 [8] C. Shen, J. Kim, L. Wang, and A. Hengel, “Positive semidefinite metric learning with boosting,” in Advances in neural information processing systems, pp. 1651–1659, 2009.
 [9] J. Bi, D. Wu, L. Lu, M. Liu, Y. Tao, and M. Wolf, “Adaboost on lowrank psd matrices for metric learning,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 2617–2624, IEEE, 2011.
 [10] S. P. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
 [11] R. Rosales and G. Fung, “Learning sparse metrics via linear programming,” in Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 367–373, ACM, 2006.
 [12] Y. Ying, K. Huang, and C. Campbell, “Sparse metric learning via smooth optimization,” in Advances in neural information processing systems, pp. 2214–2222, 2009.
 [13] X.T. Yuan and T. Zhang, “Truncated power method for sparse eigenvalue problems,” The Journal of Machine Learning Research, vol. 14, no. 1, pp. 899–925, 2013.
 [14] P. J. Green and B. W. Silverman, Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach. Chapman & Hall, London, 1994.
 [15] L. Torresani and K.c. Lee, “Large margin component analysis,” in Advances in neural information processing systems, pp. 1385–1392, 2006.
 [16] D. Kedem, S. Tyree, F. Sha, G. R. Lanckriet, and K. Q. Weinberger, “Nonlinear metric learning,” in Advances in Neural Information Processing Systems, pp. 2573–2581, 2012.
 [17] J. H. Friedman, “Flexible metric nearest neighbor classification,” Unpublished manuscript available by anonymous FTP from playfair. stanford. edu (see pub/friedman/README), 1994.
 [18] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov, “Neighbourhood components analysis,” in Advances in neural information processing systems, pp. 513–520, MIT Press, 2005.
 [19] Y. Hong, Q. Li, J. Jiang, and Z. Tu, “Learning a mixture of sparse distance metrics for classification and dimensionality reduction,” in Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 906–913, IEEE, 2011.
 [20] W. Liu, S. Ma, D. Tao, J. Liu, and P. Liu, “Semisupervised sparse metric learning using alternating linearization optimization,” in Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1139–1148, ACM, 2010.
 [21] G.J. Qi, J. Tang, Z.J. Zha, T.S. Chua, and H.J. Zhang, “An efficient sparse metric learning in highdimensional space via l 1penalized logdeterminant regularization,” in Proceedings of the 26th Annual International Conference on Machine Learning, pp. 841–848, ACM, 2009.
 [22] K. Huang, Y. Ying, and C. Campbell, “Gsml: A unified framework for sparse metric learning,” in Data Mining, 2009. ICDM’09. Ninth IEEE International Conference on, pp. 189–198, IEEE, 2009.
 [23] N. Aronszajn, “Theory of reproducing kernels,” Transactions of the American mathematical society, pp. 337–404, 1950.
 [24] J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” Annals of statistics, pp. 1189–1232, 2001.
 [25] J. H. Friedman, “Stochastic gradient boosting,” Computational Statistics & Data Analysis, vol. 38, no. 4, pp. 367–378, 2002.
 [26] L. Breiman, “Random forests,” Machine learning, vol. 45, no. 1, pp. 5–32, 2001.
 [27] M. Lichman, “UCI machine learning repository.” http://archive.ics.uci.edu/ml, 2013. University of California, Irvine, School of Information and Computer Sciences.
 [28] I. Guyon, J. Li, T. Mader, P. A. Pletscher, G. Schneider, and M. Uhr, “Feature selection with the CLOP package,” tech. rep., http://clopinet.com/isabelle/Projects/ETH/TMfextractclass.pdf (2006, accessed 10/10/2013), 2006.
 [29] I. Guyon, J. Li, T. Mader, P. A. Pletscher, G. Schneider, and M. Uhr, “Competitive baseline methods set new standards for the nips 2003 feature selection benchmark,” Pattern recognition letters, vol. 28, no. 12, pp. 1438–1444, 2007.
 [30] M. B. Kursa and W. R. Rudnicki, “Feature selection with the Boruta package,” Journal of Statistical Software, vol. 36, no. 11, 2010.

 [31] R. R. Suarez, J. M. Valencia-Ramirez, and M. Graff, “Genetic programming as a feature selection algorithm,” in Power, Electronics and Computing (ROPEC), 2014 IEEE International Autumn Meeting on, pp. 1–5, IEEE, 2014.
 [32] T. Turki and