1 Introduction
Ridge regression is a popular penalized regression method in machine learning and statistics. The Regularized Least Squares Classifier (RLSC) is a simple classifier based on least squares with a long history in machine learning (Zhang and Peng, 2004; Poggio and Smale, 2003; Rifkin et al., 2003; Fung and Mangasarian, 2001; Suykens and Vandewalle, 1999; Zhang and Oles, 2001; Agarwal, 2002); it is the classification analogue of ridge regression. RLSC is known to perform comparably to the popular Support Vector Machine (SVM) (Rifkin et al., 2003; Fung and Mangasarian, 2001; Suykens and Vandewalle, 1999; Zhang and Oles, 2001), and it can be solved by simple vector-space operations, without the quadratic optimization that SVM requires. We propose a deterministic feature selection technique for RLSC with provable guarantees. There exist numerous feature selection techniques that work well empirically, and there also exist randomized feature selection methods, such as leverage-score sampling (Dasgupta et al., 2007), that come with provable guarantees and work well empirically. However, the randomized methods have a failure probability and must be rerun multiple times to get accurate results; moreover, a randomized algorithm may not select the same features in different runs. A deterministic algorithm selects the same features irrespective of how many times it is run, which is important in many applications. Unsupervised feature selection selects features obliviously to the class labels.
In this work, we present a new provably accurate unsupervised feature selection technique for RLSC: a deterministic sampling-based strategy with provable, nontrivial worst-case performance bounds.
We also use single-set spectral sparsification and leverage-score sampling as unsupervised feature selection algorithms for ridge regression in the fixed design setting. Since the methods are unsupervised, they remain valid in the fixed design setting, where the target variables carry additive homoskedastic noise. The algorithms sample a subset of the features of the original data matrix and then perform the regression task on the reduced-dimension matrix. We provide risk bounds for both feature selection algorithms for ridge regression in the fixed design setting.
The number of features selected by both algorithms is proportional to the rank of the training set. In practice, the deterministic sampling-based feature selection algorithm outperforms existing methods of feature selection.
2 Our Contributions
We introduce single-set spectral sparsification as a provably accurate deterministic feature selection technique for RLSC in an unsupervised setting. The number of features selected by the algorithm is independent of the total number of features, but depends on the number of data points. The algorithm selects a small number of features and solves the classification problem using those features. Dasgupta et al. (2007) used a leverage-score based randomized feature selection technique for RLSC and provided worst-case guarantees for the approximate classification function relative to the one using all features. We use a deterministic algorithm to provide worst-case generalization error guarantees. The deterministic algorithm does not come with a failure probability, and the number of features it requires is smaller than that required by the randomized algorithm: the leverage-score based algorithm has a sampling complexity of O((n/ε²) log(n/(ε²δ))), whereas single-set spectral sparsification requires only O(n/ε²) features to be picked. Here n is the number of training points, δ is a failure probability and ε is an accuracy parameter. As in Dasgupta et al. (2007), we also provide additive-error approximation guarantees for any test point and relative-error approximation guarantees for test points that satisfy certain conditions with respect to the training set.
We introduce single-set spectral sparsification and leverage-score sampling as unsupervised feature selection algorithms for ridge regression and provide risk bounds for the subsampled problems in the fixed design setting. The risk in the sampled space is comparable to the risk in the full-feature space, and we give relative-error guarantees on the risk for both feature selection methods.
From an empirical perspective, we evaluate single-set spectral sparsification on synthetic data and 48 document-term matrices, which are a subset of the TechTC-300 (Davidov et al., 2004) dataset. We compare the single-set spectral sparsification algorithm with leverage-score sampling, information gain, rank-revealing QR factorization (RRQR) and random feature selection. We do not report running times because feature selection is an offline task. The experimental results indicate that single-set spectral sparsification outperforms all the other methods in terms of out-of-sample error on all 48 TechTC-300 datasets. We observe that the deterministic algorithm requires a much smaller number of features than leverage-score sampling to achieve good performance.
3 Background and Related Work
3.1 Notation
Matrices are denoted A, B, ... and column vectors a, b, ...; e_i (for all i) is the standard basis vector, whose dimensionality will be clear from context; and I_n is the n × n identity matrix. The Singular Value Decomposition (SVD) of a matrix A ∈ ℝ^{n×d} of rank ρ is A = UΣV^T, where U ∈ ℝ^{n×ρ} is an orthogonal matrix containing the left singular vectors, Σ ∈ ℝ^{ρ×ρ} is a diagonal matrix containing the singular values σ_1 ≥ ... ≥ σ_ρ > 0, and V ∈ ℝ^{d×ρ} is a matrix containing the right singular vectors. The spectral norm of A is ||A||_2 = σ_1; σ_max and σ_min are the largest and smallest singular values of A, and κ_A = σ_max/σ_min is the condition number of A. U^⊥ denotes any orthogonal matrix whose columns span the subspace orthogonal to that spanned by the columns of U. A vector q can be expressed as q = Aα + U^⊥β for some vectors α and β, i.e. q has one component along the span of A and another component orthogonal to it.

3.2 Matrix Sampling Formalism
We now present the tools of feature selection. Let A ∈ ℝ^{d×n} be the data matrix consisting of n points in d dimensions, and let S ∈ ℝ^{r×d} be a matrix such that SA contains r rows of A. S is a binary indicator matrix, which has exactly one non-zero element in each row; the position of the non-zero element in a row of S indicates which row of A will be selected. Let D ∈ ℝ^{r×r} be the diagonal matrix that rescales the rows of A appearing in SA. The matrices S and D are called the sampling and rescaling matrices respectively. We will replace the sampling and rescaling matrices by the single matrix R = DS ∈ ℝ^{r×d}, which specifies which r of the d rows of A are to be sampled and how they are to be rescaled.
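As a concrete illustration, the sampling and rescaling matrices can be built explicitly; a minimal numpy sketch, where the selected indices and weights are arbitrary placeholders rather than the output of any particular selection method:

```python
import numpy as np

d, n, r = 6, 4, 3                 # d features, n points, r selected features
rng = np.random.default_rng(0)
X = rng.standard_normal((d, n))   # data matrix with features as rows

idx = np.array([0, 2, 5])         # indices of the selected features (placeholder)
w = np.array([1.5, 0.8, 2.0])     # rescaling weights (method-dependent placeholder)

# Sampling matrix S: binary, one nonzero entry per row, marking the feature.
S = np.zeros((r, d))
S[np.arange(r), idx] = 1.0

# Rescaling matrix D: diagonal, rescales the sampled rows.
D = np.diag(w)

# The single matrix R = D S samples and rescales in one step.
R = D @ S
X_reduced = R @ X                 # r x n: selected features, rescaled

# Equivalent to picking rows directly and rescaling them.
assert np.allclose(X_reduced, w[:, None] * X[idx])
```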
3.3 RLSC Basics
Consider a training dataset X ∈ ℝ^{d×n} of n points in d dimensions, with respective labels y_i ∈ {−1, +1} for i = 1, ..., n.
The solution of binary classification problems via Tikhonov regularization in a Reproducing Kernel Hilbert Space (RKHS) using the squared loss function results in the Regularized Least Squares Classification (RLSC) problem (Rifkin et al., 2003), which can be stated as:

min_{x ∈ ℝ^n} ||Kx − y||_2² + λ x^T K x,        (1)

where K is the kernel matrix defined over the training dataset, λ > 0 is a regularization parameter and y is the n-dimensional class label vector. In matrix notation, the training dataset X is a d × n matrix, consisting of n data points with d features each. Throughout this study, we assume that X is a full-rank matrix. We shall consider the linear kernel, which can be written as K = X^T X. Using the SVD X = UΣV^T, the optimal solution of Eqn. (1) in the full-dimensional space is

x_opt = V(Σ² + λI)^{-1} V^T y.        (2)
The vector x_opt can be used as a classification function that generalizes to test data. If q ∈ ℝ^d is the new test point, then the binary classification function is:

f(q) = x_opt^T X^T q.        (3)

Then, sign(f(q)) gives the predicted label (−1 or +1) to be assigned to the new test point q.
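As a concrete sketch of Eqns. (1)-(3) with the linear kernel, the following numpy snippet computes the RLSC solution both directly and via the SVD; the layout (points as columns of X) and all variable names are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, lam = 20, 10, 0.1

X = rng.standard_normal((d, n))       # d features, n training points (columns)
y = np.sign(rng.standard_normal(n))   # +/-1 class labels

K = X.T @ X                           # linear kernel, n x n
# Minimizing ||K x - y||^2 + lam * x^T K x gives x = (K + lam I)^{-1} y.
x_opt = np.linalg.solve(K + lam * np.eye(n), y)

# Equivalent SVD form: with X = U S V^T, x = V (S^2 + lam I)^{-1} V^T y.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
x_svd = Vt.T @ (Vt @ y / (s**2 + lam))
assert np.allclose(x_opt, x_svd)

# Classification of a test point q: predict sign(f(q)) with f(q) = x^T X^T q.
q = rng.standard_normal(d)
f_q = x_opt @ (X.T @ q)
pred = np.sign(f_q)
```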
Our goal is to study how RLSC performs when the deterministic sampling-based feature selection algorithm is used to select features in an unsupervised setting. Let R ∈ ℝ^{r×d} be the matrix that samples and rescales r rows (features) of X, thus reducing the dimensionality of the training set from d to r ≪ d, where r is proportional to the rank of the input matrix. The training set transformed into r dimensions is given by X̃ = RX, and the RLSC problem becomes

min_{x ∈ ℝ^n} ||K̃x − y||_2² + λ x^T K̃ x,  with K̃ = X̃^T X̃,        (4)

thus giving an optimal vector x̃_opt. The new test point q is first dimensionally reduced to q̃ = Rq ∈ ℝ^r and then classified by the function

f̃(q̃) = x̃_opt^T X̃^T q̃.        (5)
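A sketch of this sampled pipeline follows; the sampling matrix is chosen uniformly at random purely for illustration (BSS or leverage-score sampling would supply R instead), and the sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, r, lam = 20, 10, 15, 0.1       # keep r of the d features
X = rng.standard_normal((d, n))      # features as rows, points as columns
y = np.sign(rng.standard_normal(n))

# Placeholder feature selection: uniform random indices, unit weights.
idx = rng.choice(d, size=r, replace=False)
R = np.zeros((r, d))
R[np.arange(r), idx] = 1.0

X_red = R @ X                        # r x n reduced training set
K_red = X_red.T @ X_red              # kernel in the sampled space
x_red = np.linalg.solve(K_red + lam * np.eye(n), y)

# A test point is reduced with the same R before classification.
q = rng.standard_normal(d)
f_q = x_red @ (X_red.T @ (R @ q))
pred = np.sign(f_q)
```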
In subsequent sections, we will assume that the test point q is of the form q = Xα + U^⊥β. The first part of the expression is the portion of the test point that lies in the span of the training set, and the second part shows how novel the test point is compared to the training set, i.e. ||β||_2 measures how much of q lies outside the subspace spanned by the training set.
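This decomposition of a test point is easy to compute explicitly; a small numpy sketch (variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 8, 3
X = rng.standard_normal((d, n))       # training set, full column rank

# The trailing left singular vectors span the complement of span(X).
U, _, _ = np.linalg.svd(X, full_matrices=True)
U_perp = U[:, n:]

q = rng.standard_normal(d)            # a test point

alpha = np.linalg.lstsq(X, q, rcond=None)[0]   # X @ alpha projects q onto span(X)
beta = U_perp.T @ q                            # the novel component of q

# q splits exactly into an in-span part and a part outside the span.
assert np.allclose(X @ alpha + U_perp @ beta, q)
novelty = np.linalg.norm(beta)        # ||beta|| measures how novel q is
```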
3.4 Ridge Regression Basics
Consider a dataset of n points in d dimensions, with X ∈ ℝ^{n×d} and y ∈ ℝ^n. Here X contains n i.i.d. samples of the d-dimensional independent variable and y is the real-valued response vector. Ridge Regression (RR), or Tikhonov regularization, penalizes the ℓ_2 norm of the parameter vector β and shrinks the estimated coefficients towards zero. In the fixed design setting, we have

y = Xβ + ω,

where ω ∈ ℝ^n is the homoskedastic noise vector with mean 0 and variance σ². Let β̂_λ be the solution to the ridge regression problem. The RR problem is stated as:

β̂_λ = argmin_{β ∈ ℝ^d} ||y − Xβ||_2² + λ||β||_2².        (6)
The solution to Eqn. (6) is β̂_λ = (X^T X + λI_d)^{-1} X^T y. One can also solve the same problem in the dual space. Using the change of variables β = X^T α, where α ∈ ℝ^n, and letting K = XX^T be the linear kernel defined over the training dataset, the optimization problem becomes:

α̂_λ = argmin_{α ∈ ℝ^n} ||y − Kα||_2² + λ α^T K α.        (7)

Throughout this study, we assume that X is a full-rank matrix. Using the SVD X = UΣV^T, the optimal solution in the dual space (Eqn. (7)) for the full-dimensional data is given by α̂_λ = U(Σ² + λI_n)^{-1} U^T y. The primal solution is β̂_λ = X^T α̂_λ = VΣ(Σ² + λI_n)^{-1} U^T y.
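A quick numerical check of these closed forms (the n × d layout, with points as rows, is an assumption, as are the variable names):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, lam = 12, 30, 0.5
X = rng.standard_normal((n, d))      # n points, d features
y = rng.standard_normal(n)

# Dual solution alpha = (K + lam I)^{-1} y with K = X X^T ...
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(n), y)

# ... equals the SVD form U (Sigma^2 + lam I)^{-1} U^T y (full row rank).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
alpha_svd = U @ (U.T @ y / (s**2 + lam))
assert np.allclose(alpha, alpha_svd)

# The primal solution beta = X^T alpha matches the direct ridge solution
# (X^T X + lam I_d)^{-1} X^T y; the dual route is cheaper when n << d.
beta_dual = X.T @ alpha
beta_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
assert np.allclose(beta_dual, beta_primal)
```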
In the sampled space, we have X̃ = XR^T ∈ ℝ^{n×r}, where R = DS samples and rescales r of the d features. The dual problem in the sampled space can be posed as:

α̃_λ = argmin_{α ∈ ℝ^n} ||y − K̃α||_2² + λ α^T K̃ α,  where K̃ = X̃X̃^T.        (8)

The optimal dual solution in the sampled space is α̃_λ = (K̃ + λI_n)^{-1} y. The primal solution is β̃_λ = X̃^T α̃_λ ∈ ℝ^r.
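The sampled-space pipeline can be sketched as follows; the uniform choice of features is a placeholder for BSS or leverage-score sampling, and the sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, r, lam = 10, 40, 15, 0.5
X = rng.standard_normal((n, d))       # n points, d features
y = rng.standard_normal(n)

# Placeholder feature sampling: R selects r of the d features uniformly.
idx = rng.choice(d, size=r, replace=False)
R = np.zeros((r, d))
R[np.arange(r), idx] = 1.0

X_red = X @ R.T                       # n x r sampled design matrix
K_red = X_red @ X_red.T               # kernel in the sampled space
alpha_red = np.linalg.solve(K_red + lam * np.eye(n), y)
beta_red = X_red.T @ alpha_red        # r-dimensional primal solution

y_hat = X_red @ beta_red              # in-sample predictions in the sampled space
```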
3.5 Related Work
The work most closely related to ours is that of Dasgupta et al. (2007), who used a leverage-score based randomized feature selection technique for RLSC and provided worst-case bounds comparing the approximate classifier with the classifier using all features. The proof of their main quality-of-approximation results provided an intuition of the circumstances under which their feature selection method works well. The running time of leverage-score based sampling is dominated by the time to compute the SVD of the training set, whereas single-set spectral sparsification additionally pays for its greedy iterations; single-set spectral sparsification is a slower but more accurate method than leverage-score sampling. Another work on dimensionality reduction for RLSC is that of Avron et al. (2013), who used efficient randomized algorithms for solving RLSC in settings where the design matrix has a Vandermonde structure. However, this technique is different from ours, since their work focuses on dimensionality reduction using linear combinations of features, not on actual feature selection.
Lu et al. (2013) used the randomized Walsh-Hadamard transform to lower the dimension of the data matrix and subsequently solved the ridge regression problem in the lower-dimensional space. They provided risk bounds for their algorithm in the fixed design setting. However, this is different from our work, since they use linear combinations of features, while we select actual features from the data.
4 Our Main Tools
4.1 Single-Set Spectral Sparsification
We describe the Single-Set Spectral Sparsification algorithm (BSS for short; the name comes from the authors Batson, Spielman and Srivastava) of Batson et al. (2009) as Algorithm 1. Algorithm 1 is a greedy technique that selects columns one at a time. Consider the input matrix V^T ∈ ℝ^{n×d}, with V^T V = I_n, as a set of d column vectors [v_1, ..., v_d], with v_i ∈ ℝ^n. Given n and r > n, we iterate over τ = 0, 1, ..., r − 1. Define the parameters δ_L = 1, δ_U = (1 + √(n/r))/(1 − √(n/r)), L_τ = τ − √(rn) and U_τ = δ_U (τ + √(rn)). For a lower barrier L, an upper barrier U, and a symmetric positive semi-definite matrix W with eigenvalues λ_1, ..., λ_n ∈ (L, U), define

Φ_L(W) = Σ_{i=1}^{n} 1/(λ_i − L)  and  Φ^U(W) = Σ_{i=1}^{n} 1/(U − λ_i)

as the lower and upper potentials respectively. These potential functions measure how far the eigenvalues of W are from the lower and upper barriers L and U. We define L(v, δ_L, W, L) and U(v, δ_U, W, U) as follows:

L(v, δ_L, W, L) = v^T (W − (L + δ_L)I)^{-2} v / (Φ_{L+δ_L}(W) − Φ_L(W)) − v^T (W − (L + δ_L)I)^{-1} v,
U(v, δ_U, W, U) = v^T ((U + δ_U)I − W)^{-2} v / (Φ^U(W) − Φ^{U+δ_U}(W)) + v^T ((U + δ_U)I − W)^{-1} v.

Starting from W_0 = 0, at every iteration τ there exists an index i_τ and a weight t_τ > 0 such that U(v_{i_τ}, δ_U, W_τ, U_τ) ≤ 1/t_τ ≤ L(v_{i_τ}, δ_L, W_τ, L_τ); the algorithm sets W_{τ+1} = W_τ + t_τ v_{i_τ} v_{i_τ}^T. Thus, there will be at most r columns selected after r iterations. The running time of the algorithm is dominated by the search for an index i_τ satisfying the two conditions and computing the corresponding weight t_τ. One needs to compute the upper and lower potentials, and hence the eigenvalues of W_τ; this costs O(n³) per iteration and O(rn³) in total. Evaluating U and L for every one of the d candidate vectors can be done in O(n²) per vector, i.e. O(dn²) per iteration, for a total of O(rdn²). Thus the total running time of the algorithm is O(rdn²). We present the following lemma for the single-set spectral sparsification algorithm.
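A compact implementation sketch of this barrier argument follows. It uses the parameter choices of the original Batson et al. analysis (phrased through the oversampling ratio r/n), which differ slightly in constants from the barriers above; the function and variable names are our own:

```python
import numpy as np

def bss_select(V, r):
    """Barrier-based greedy selection of Batson, Spielman and Srivastava.

    V : (m, n) array with orthonormal columns, so its rows v_1, ..., v_m
        satisfy sum_i v_i v_i^T = I_n.
    r : number of greedy steps, r > n; at most r rows get nonzero weight.
    Returns a length-m vector of selection weights.
    """
    m, n = V.shape
    ratio = r / n                                # oversampling ratio (> 1)
    sq = np.sqrt(ratio)
    delta_l = 1.0
    delta_u = (sq + 1.0) / (sq - 1.0)
    l = -n * sq                                  # initial lower barrier
    u = n * (ratio + sq) / (sq - 1.0)            # initial upper barrier

    W = np.zeros((n, n))
    weights = np.zeros(m)
    eye = np.eye(n)
    for _ in range(r):
        lam = np.linalg.eigvalsh(W)
        u_next, l_next = u + delta_u, l + delta_l

        Mu = np.linalg.inv(u_next * eye - W)     # (U' I - W)^{-1}, PD
        Ml = np.linalg.inv(W - l_next * eye)     # (W - L' I)^{-1}, PD
        phi_u = np.sum(1.0 / (u - lam))          # upper potential at U
        phi_u_next = np.sum(1.0 / (u_next - lam))
        phi_l = np.sum(1.0 / (lam - l))          # lower potential at L
        phi_l_next = np.sum(1.0 / (lam - l_next))

        VMu, VMl = V @ Mu, V @ Ml
        # Adding t v v^T keeps the upper potential bounded iff 1/t >= UA(v).
        UA = np.sum(VMu**2, axis=1) / (phi_u - phi_u_next) + np.sum(VMu * V, axis=1)
        # The lower potential stays bounded iff 1/t <= LA(v).
        LA = np.sum(VMl**2, axis=1) / (phi_l_next - phi_l) - np.sum(VMl * V, axis=1)

        i = int(np.argmax(LA - UA))              # a valid index always exists
        t = 2.0 / (UA[i] + LA[i])                # so that UA[i] <= 1/t <= LA[i]
        weights[i] += t
        W += t * np.outer(V[i], V[i])
        u, l = u_next, l_next
    return weights

rng = np.random.default_rng(0)
m, n = 60, 4
V = np.linalg.qr(rng.standard_normal((m, n)))[0]   # orthonormal columns
r = 4 * n                                          # oversampling ratio 4
w = bss_select(V, r)
W = (V.T * w) @ V                                  # = sum_i w_i v_i v_i^T
ev = np.linalg.eigvalsh(W)
# The analysis guarantees kappa(W) < ((sqrt(4)+1)/(sqrt(4)-1))^2 = 9.
```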
Lemma 1.
BSS (Batson et al., 2009): Given V^T ∈ ℝ^{n×d} satisfying V^T V = I_n and r > n, we can deterministically construct sampling and rescaling matrices S and D, with R = DS, such that, for all y ∈ ℝ^n:

(1 − √(n/r))² ||y||_2² ≤ ||RVy||_2² ≤ (1 + √(n/r))² ||y||_2².
We now present a slightly modified version of Lemma 1 for our theorems.
Lemma 2.
Given V^T ∈ ℝ^{n×d} satisfying V^T V = I_n and ε ∈ (0, 1/2], we can deterministically construct sampling and rescaling matrices S and D such that, for R = DS with r = ⌈n/ε²⌉ sampled rows:

||V^T R^T R V − I_n||_2 ≤ 3ε.
Proof.
Note: Let E = V^T R^T R V − I_n. By Lemma 1 with r = ⌈n/ε²⌉, so that √(n/r) ≤ ε, every eigenvalue of V^T R^T R V lies in [(1 − ε)², (1 + ε)²], and hence ||E||_2 ≤ max{1 − (1 − ε)², (1 + ε)² − 1} = 2ε + ε². It is thus possible to set an upper bound on ||E||_2 by setting the value of r. We will assume ε ∈ (0, 1/2], so that ||E||_2 ≤ 3ε. ∎
4.2 Leverage Score Sampling
Our randomized feature selection method is based on importance sampling, or the so-called leverage-score sampling of Rudelson and Vershynin (2007). Let U ∈ ℝ^{d×n} contain the top-n left singular vectors of the training set X ∈ ℝ^{d×n}. We use a carefully chosen probability distribution over the d features of the form

p_i = ||U_(i)||_2² / n,  for i = 1, ..., d,        (9)

i.e. proportional to the squared Euclidean norms of the rows of the matrix of left singular vectors, and select r rows of X in r i.i.d. trials, rescaling each sampled row by 1/√(r p_i). The time complexity is dominated by the time to compute the SVD of X.
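A minimal numpy sketch of this sampling scheme (the d × n layout, with features as rows, is an assumption):

```python
import numpy as np

rng = np.random.default_rng(6)
d, n, r = 50, 5, 20
X = rng.standard_normal((d, n))       # d features, n points

# Leverage scores: squared row norms of the left singular vector matrix.
U, _, _ = np.linalg.svd(X, full_matrices=False)   # U is d x n
p = np.sum(U**2, axis=1) / n          # probabilities, sum to 1 (rank n)

# Sample r features i.i.d. with replacement, rescale by 1/sqrt(r p_i).
idx = rng.choice(d, size=r, replace=True, p=p)
X_red = X[idx] / np.sqrt(r * p[idx])[:, None]
```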
5 Theory
In this section, we describe the theoretical guarantees of RLSC using BSS, and also the risk bounds of ridge regression using BSS and leverage-score sampling. Before we begin, we state the following lemmas from numerical linear algebra, which will be required for our proofs.
Lemma 4.
(Stewart and Sun, 1990) For any matrix , such that is invertible,
Lemma 5.
(Stewart and Sun, 1990) Let and be invertible matrices. Then
Lemma 6.
(Demmel and Veselic, 1992) Let and be matrices such that the product is a symmetric positive definite matrix with matrix . Let the product be a perturbation such that, Here corresponds to the smallest eigenvalue of . Let be the ith eigenvalue of and let be the ith eigenvalue of Then,
Lemma 7.
Let . Then
The proof of this lemma is similar to Lemma 4.3 of Drineas et al. (2006).
5.1 Our Main Theorems on RLSC
The following theorem gives additive-error guarantees on the generalization bounds of the approximate classifier relative to the classifier with no feature selection. The classification error bound of BSS on RLSC depends on the condition number of the training set and on how much of the test set lies in the subspace of the training set.
Theorem 1.
Let ε ∈ (0, 1/2] be an accuracy parameter and let r = O(n/ε²) be the number of features selected by BSS. Let R ∈ ℝ^{r×d} be the matrix defined in Lemma 2. Let X ∈ ℝ^{d×n} be the training set, let X̃ = RX be the reduced-dimensional matrix, and let q = Xα + U^⊥β be the test point. Then, the following hold:

If , then

If , then
Proof.
We assume that is a fullrank matrix. Let and . Using the SVD of , we define
(10) 
The optimal solution in the sampled space is given by,
(11) 
It can be proven easily that the matrices involved are invertible. We focus on the following term; using the SVD of X, we get
(12)  
(13) 
Eqn(12) follows because of the fact and by substituting from Eqn.(2). Eqn.(13) follows from the fact that the matrices and are invertible. Now,
(14)  (15)
We bound (14) and (15) separately. Substituting the values computed above,
(16)  
The last line follows from Lemma 4 in Appendix, which states that , where . The spectral norm of is bounded by,
(17) 
We now bound (14). Substituting (13) and (16) in (14),
The last line follows because of Lemma 5 and the fact that all matrices involved are invertible. Here,
Since the spectral norms of the first two factors are bounded, we only need to bound the spectral norm of the remaining factor, which is the inverse of the smallest singular value of the corresponding matrix. From matrix perturbation theory (Stewart and Sun, 1990) and (17), we get
Here, represents the singular value of the matrix .
Also, where are the singular values of .
Thus,
Here, σ_max and σ_min denote the largest and smallest singular values of X. Since κ_X = σ_max/σ_min is the condition number of X, we bound (14):
(18) 
For , the term in Eqn.(18) is always larger than , so it can be upper bounded by (assuming ). Also,
This follows from the fact, that and as is a fullrank orthonormal matrix and the singular values of are equal to ; making the spectral norm of its inverse at most one. Thus we get,
(19) 
We now bound (15). Expanding (15) using the SVD,
The first inequality follows from ; and the second inequality follows from Lemma 7. To conclude the proof, we bound the spectral norm of . Note that from Eqn.(10), and ,
One can get a lower bound for the smallest singular value of using matrix perturbation theory and by comparing the singular values of this matrix to the singular values of We get,
(20)  
We assumed that , which implies Combining these, we get,
(21) 
Combining Eqns. (19) and (21), we complete the proof for the case β ≠ 0. For β = 0, Eqn. (18) becomes zero and the result follows. ∎
Our next theorem provides relative-error guarantees on the classification error when the test point has no new components, i.e. q = Xα.
Theorem 2.
Let ε be an accuracy parameter and let r = O(n/ε²) be the number of features selected by BSS. Let q = Xα be the test point, i.e. it lies entirely in the subspace spanned by the training set, where the two vectors Xα and y satisfy the property,