Sparse Algorithm for Robust LSSVM in Primal Space

02/07/2017 · by Li Chen et al. · Xidian University; NetEase, Inc.

Thanks to its closed-form solution, the least squares support vector machine (LSSVM) has been widely used for classification and regression problems, with performance comparable to other types of SVMs. However, LSSVM has two drawbacks: it is sensitive to outliers and its solution lacks sparseness. Robust LSSVM (R-LSSVM) partly overcomes the first drawback via a nonconvex truncated loss function, but the existing R-LSSVM algorithms produce dense solutions and are inefficient for training large-scale problems. In this paper, we interpret the robustness of R-LSSVM from a re-weighted viewpoint and give a primal R-LSSVM via the representer theorem. The new model may have a sparse solution if the corresponding kernel matrix has low rank. Then, by approximating the kernel matrix with a low-rank matrix and smoothing the loss function with an entropy penalty function, we propose a convergent sparse R-LSSVM (SR-LSSVM) algorithm that achieves a sparse solution of the primal R-LSSVM and overcomes both drawbacks of LSSVM simultaneously. The proposed algorithm has lower complexity than the existing algorithms and is very efficient for training large-scale problems. Many experimental results illustrate that SR-LSSVM achieves better or comparable performance with less training time than related algorithms, especially on large-scale problems.




1 Introduction

Least squares support vector machine (LSSVM) was introduced by Suykens and Vandewalle [Suykens1999] and has become a powerful learning technique for classification and regression. It has been successfully used in many real-world pattern recognition problems, such as disease diagnosis [Duygu2011], fault detection [Long2014], image classification [Yang2015], the solution of partial differential equations [Mehrkanoon2015], and visual tracking [Gao2016]. LSSVM minimizes the least squares errors on the training samples. Compared with other SVMs, LSSVM is based on equality constraints rather than inequality ones, so it has a closed-form solution obtained by solving a system of linear equations, instead of iteratively solving a quadratic programming (QP) problem as other SVMs do. Training LSSVM is therefore simpler than training other SVMs.
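The closed-form training step can be made concrete with a short sketch. The following illustrative implementation solves the standard bordered linear system of the LSSVM dual; the function names and the Gaussian-kernel choice are ours, not the paper's:

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    """Closed-form LSSVM training: one symmetric linear system
        [ 0   1^T          ] [b    ]   [0]
        [ 1   K + I/gamma  ] [alpha] = [y]
    instead of an iterative QP solver."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xnew, Xtrain, alpha, b, sigma=1.0):
    return rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b
```

A single dense solve costs O(n^3), which already hints at the scalability issue discussed below.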

However, LSSVM has two main drawbacks. First, it is sensitive to outliers, because outliers tend to have large support values (the values of the Lagrange multipliers), which means that outliers have a larger influence than other samples in constructing the decision function. Second, the solution of LSSVM lacks sparseness, which limits the method on large-scale problems.

To overcome the sensitivity of LSSVM to outliers, Suykens et al. [Suykens2002] proposed the weighted LSSVM (W-LSSVM) model, which puts small weights on less important samples or outliers to reduce their influence on the model. Other weight-setting strategies have also been proposed; see [Valyon2003, You2011]. Theoretical analyses and experimental results indicate that such methods are robust to outliers, but they need to pre-solve the original LSSVM to set the weights, so none of them is suitable for training large-scale problems. Another technique for achieving robustness relies on nonconvex loss functions. Based on the truncated least squares loss function, Wang et al. [KuainiWang2014] and Yang et al. [XiaoweiYang2014] presented the robust LSSVM (R-LSSVM) model. Experimental results show that the R-LSSVM model significantly reduces the effect of outliers. However, the solutions produced by Yang's and Wang's algorithms both lack sparseness, and these algorithms need to pre-compute the whole kernel matrix and a related matrix inverse, so they are both time-consuming on large-scale data sets. They are even unable to handle data sets containing more than 10,000 training samples on common computers.

There are also methods that promote the sparsity of LSSVM. Suykens et al. [Suykens2000, J.A.K.Suykens2002] proposed a pruning algorithm that iteratively removes a small fraction of samples (5%) with the smallest support values to impose sparseness. This pruning algorithm requires retraining LSSVM on the reduced training set in each iteration, which leads to a large computational cost. The fixed-size least squares support vector machine (FS-LSSVM) [Suykens2002] is another sparse algorithm. It fixes a number of support vectors (SVs), referred to as prototype vectors, in advance, and then iteratively replaces them with samples randomly selected from the training set according to the quadratic Rényi entropy criterion. However, in each iteration this method computes the entropy only of the samples in the working set rather than of the whole data set, which may lead to suboptimal solutions. Jiao et al. [Jiao2007] presented a fast sparse approximation for LSSVM (FSA-LSSVM), in which an approximate decision function is built iteratively by adding basis functions from a kernel-based dictionary one by one until the stopping criterion is satisfied. This algorithm obtains sparse classifiers at a rather low cost, but under very sparse settings the experimental results in [sszhou2016] show that FSA-LSSVM does not perform well on some training data sets. Zhou [sszhou2016] proposed pivoting Cholesky of primal LSSVM (PCP-LSSVM), an iterative method based on incomplete pivoting Cholesky factorization of the kernel matrix. Theoretical analyses and experimental results indicate that PCP-LSSVM can obtain acceptable test accuracy with extremely sparse solutions.

In this paper, we aim to obtain a sparse solution of the R-LSSVM model, thereby overcoming both drawbacks of LSSVM simultaneously. The new algorithm solves R-LSSVM in the primal space, as Zhou [sszhou2016] did for LSSVM. Our main contributions can be summarized as follows:

  • By introducing an equivalent form of the truncated least squares loss function, we show that R-LSSVM is equivalent to a re-weighted LSSVM model, which explains the robustness of R-LSSVM.

  • We show that the representer theorem also holds for the nonconvex loss function, and propose the primal R-LSSVM model, which has a sparse solution if the kernel matrix is of low rank.

  • We propose the sparse R-LSSVM (SR-LSSVM) algorithm to obtain a sparse solution of R-LSSVM by applying a low-rank approximation of the kernel matrix. The complexity of the new algorithm is lower than that of the existing non-sparse R-LSSVM algorithms.

  • A large number of experiments demonstrate that the proposed algorithm can process large-scale problems efficiently.

The rest of the paper is organized as follows. Brief descriptions of R-LSSVM and its existing algorithms are given in Section 2. In Section 3, the robustness of R-LSSVM is interpreted from a re-weighted viewpoint. In Section 4, the primal R-LSSVM and its smoothed version are discussed, and the novel sparse algorithm is proposed; after that, the convergence and complexity of the new algorithm are analyzed. Section 5 presents experiments showing the efficiency of the proposed algorithm. Section 6 concludes the paper.

2 Robust LSSVM model and the existing algorithms

In this section, we briefly summarize the R-LSSVM and the existing algorithms.

2.1 Robust LSSVM

Consider a training set of n sample pairs {(x_i, y_i)}_{i=1}^n, where x_i ∈ R^d are the input data and y_i ∈ {−1, +1} (for classification) or y_i ∈ R (for regression) are the output targets corresponding to the inputs. The classical LSSVM model is described as follows:

min_{w,b}  (1/2)‖w‖² + γ Σ_{i=1}^n ℓ(r_i),   (1)

where γ > 0 is the regularization parameter, w is the normal of the hyperplane, b is the bias, φ is a map which maps the input into a high-dimensional feature space (especially for managing nonlinear learning problems), and ℓ(r) = r²/2 is the least squares loss, with r_i = y_i − (w⊤φ(x_i) + b) being the prediction error.

By replacing ℓ(r) in (1) with the truncated least squares loss ℓ_c(r):

ℓ_c(r) = min(r², c²)/2,   (2)

Wang et al. [KuainiWang2014] and Yang et al. [XiaoweiYang2014] introduced the robust LSSVM (R-LSSVM):

min_{w,b}  (1/2)‖w‖² + γ Σ_{i=1}^n ℓ_c(r_i),   (3)

where c > 0 is the truncation parameter, which controls the errors of the outliers. Fig. 1 plots ℓ_c in (2), the least squares loss ℓ, and the difference between them, ρ = ℓ − ℓ_c. It is clear that the losses of the outliers (samples with larger errors) are bounded by c²/2, which reduces the effect of the outliers in R-LSSVM. We will investigate the robustness of R-LSSVM from a re-weighted viewpoint in Section 3.
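For reference, the two losses can be written in a few lines; the min(r², c²)/2 form of the truncated loss is assumed here to match the scaling ℓ(r) = r²/2 above:

```python
import numpy as np

def ls_loss(r):
    """Least squares loss: grows without bound, so outliers dominate."""
    return 0.5 * r ** 2

def truncated_ls_loss(r, c):
    """Truncated least squares loss: identical to ls_loss for |r| <= c,
    capped at c^2/2 beyond that, so outliers contribute a bounded amount."""
    return 0.5 * np.minimum(r ** 2, c ** 2)
```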

2.2 Existing algorithms for R-LSSVM

The truncated least squares loss ℓ_c is nonconvex and non-smooth, as is easily observed from Fig. 1, but it can be expressed as the difference of two convex functions ℓ and ρ [KuainiWang2014], where

ρ(r) = ℓ(r) − ℓ_c(r) = (r² − c²)/2 if |r| > c, and 0 otherwise.   (4)

Then R-LSSVM can be transformed into a difference-of-convex (DC) program:

min_{w,b}  [ (1/2)‖w‖² + γ Σ_{i=1}^n ℓ(r_i) ] − γ Σ_{i=1}^n ρ(r_i).   (5)
Figure 1: Plots of the least squares loss (dashed), the truncated least squares loss (solid) and their difference (dotted-dashed)
Figure 2: Plots of the truncated least squares loss (solid) and the smoothed truncated least squares loss (dashed)

Wang et al. [KuainiWang2014] and Yang et al. [XiaoweiYang2014] solve the DC program (5) by the concave-convex procedure (CCCP). Through different derivations, they both focus on iteratively solving the following linear equations (6).


where K is the positive semi-definite kernel matrix satisfying K_ij = φ(x_i)⊤φ(x_j), I is the identity matrix, and δ^(t) is the value of δ at the t-th iteration, satisfying (7), where r_i^(t) is the prediction error of the i-th sample at the t-th iteration and K_i is the i-th row of the kernel matrix K.

By iteratively solving (6) with respect to the coefficient vector and the bias until convergence, the decision function is obtained.

To compute (7), Wang et al. [KuainiWang2014] neglect the points of non-differentiability in ρ and adopt the formula (8), while Yang et al. [XiaoweiYang2014] compute (7) after smoothing the function by a piecewise quadratic function.

One limitation of these two algorithms is that the solution lacks sparseness: the coefficient matrix of (6) is a nonsingular symmetric dense matrix, and the vector on the right-hand side of the equations is dense. Hence both algorithms are slow and cannot train large-scale problems efficiently.

3 Robustness of R-LSSVM from a re-weighted viewpoint

Wang et al. [KuainiWang2014] illustrate the robustness of R-LSSVM only through experiments. Yang et al. [XiaoweiYang2014] explain it through the relationship between the solutions of R-LSSVM and W-LSSVM [J.A.K.Suykens2002]. In this section, we show that R-LSSVM enjoys robustness from a re-weighted viewpoint [Feng2016].

By the representer theorem in Section 4.1, R-LSSVM can be translated into the following model in the primal space, without the implicit feature map φ:

min_{β,b}  (1/2) β⊤Kβ + γ Σ_{i=1}^n ℓ_c(r_i),  with r_i = y_i − (K_i β + b).   (9)
To explain the robustness of the preceding model (9) more clearly, we propose an equivalent form of ℓ_c in Lemma 1, following the ideas in [Geman1995, Nikolova2005].

Lemma 1.

ℓ_c(r) can be expressed as

ℓ_c(r) = min_{v ∈ [0,1]} [ (v/2) r² + ((1 − v)/2) c² ],   (10)

and the minimum is attained at v = 1 if |r| ≤ c and at v = 0 otherwise.
By Lemma 1 and the research on re-weighted LSSVM in [Brabanter2009], we have

Proposition 1.

Any stationary point of R-LSSVM (9) can be obtained by solving the iteratively re-weighted LSSVM (13), where v_i^(t) is the value of the weight v_i at the t-th iteration.


Substituting (10) into (9) yields problem (14). Since ℓ_c is nonconvex, only a stationary point of the preceding minimization problem can be expected. Let a stationary point of (9) be given. By the analysis above, there exists a weight vector such that it is also a solution of (14). Conversely, any stationary point of (14) also solves (9). Hence, we can solve (14) iteratively by the alternating direction method (ADM) [He2012], alternating between the two subproblems (15) and (16). Obviously, the optimization problem in (16) has the closed-form solution (12), and the optimization problem in (15) is just the re-weighted LSSVM (13). ∎

Since r_i denotes the prediction error, similarly to the robustness analysis in [Feng2016], the larger |r_i| is, the more likely the instance pair (x_i, y_i) is an outlier. From (12) and (13), we observe that when |r_i| is sufficiently large for an outlier instance, the corresponding weight in (13) becomes 0. That is, the truncated least squares loss function suppresses the influence of samples that are far away from their true targets. This explains the robustness of R-LSSVM from the re-weighted viewpoint.
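The alternating scheme of Proposition 1 can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: we assume hard 0/1 weights derived from the truncated loss, and a standard weighted-LSSVM linear system in which a zero weight is replaced by a tiny value to keep the system well-posed:

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def reweighted_lssvm(X, y, gamma=1.0, c=1.0, sigma=1.0, max_iter=20):
    """Sketch of the re-weighted view of R-LSSVM: alternately
    (a) solve a weighted LSSVM, and
    (b) set weight v_i = 0 for samples whose error exceeds the truncation
        level c (and v_i = 1 otherwise)."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    v = np.ones(n)
    for _ in range(max_iter):
        vv = np.maximum(v, 1e-8)           # tiny weight keeps the solve well-posed
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.diag(1.0 / (gamma * vv))
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        b, alpha = sol[0], sol[1:]
        r = y - (K @ alpha + b)            # prediction errors
        v_new = (np.abs(r) <= c).astype(float)
        if np.array_equal(v_new, v):       # weights stabilized
            break
        v = v_new
    return alpha, b, v
```

On a toy set with one gross outlier, the outlier's weight is driven to 0 and it stops influencing the fit.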

4 Sparse R-LSSVM algorithm

In this section, we give the primal R-LSSVM and propose the sparse algorithm to obtain the sparse solution of the R-LSSVM.

4.1 Primal R-LSSVM

If the loss function is convex, as in the LSSVM model (1), then by duality theory the optimal solution can be represented as

w = Σ_{i=1}^n β_i φ(x_i),   (17)

where β ∈ R^n. If the loss function is nonconvex, strong duality does not hold, so (17) cannot be obtained by duality. However, by the representer theorem [Scholkopf2001, Shai2014], it is easy to prove that (17) still holds.

Theorem 1.

Assume that φ is a mapping from R^d to a Hilbert space. Then there exists a vector β such that (17) is an optimal solution of (3) and (5).

Substituting (17) into (5), we obtain a DC program (18) with respect to β and b, written as the difference of two convex functions. For convenience, we call model (18), or its equivalent form (9), the primal R-LSSVM.

Using the CCCP method [KuainiWang2014, XiaoweiYang2014, Yuille2003], the solution of problem (18) can be obtained by iteratively solving the convex QP (19) until convergence, where δ^(t) is the same as in (7).

However, the computation of δ^(t) is not simple, since ρ is non-differentiable at some points. Inspired by the idea in [ShuishengZhou2013], we smooth ρ by the entropy penalty function, obtaining the smoothed function ρ_p defined in (20). ρ_p is a smooth approximation of ρ, and the difference between ρ_p and ρ is bounded above by a quantity that vanishes as the smoothing exponent p grows. In practice, if we set p sufficiently large, the difference between them can be neglected. Fig. 2 shows the comparison between ℓ_c and the smoothed truncated least squares loss.

The derivative of ρ_p is given in (21). Replacing ρ′ with ρ′_p in (7), the δ^(t) in (19) is calculated as in (22).

Yang et al. [XiaoweiYang2014] also adopt a smoothing procedure, but their method has to tune the smoothing parameter to obtain the best effect, which makes the parameter-adjustment procedure complex. In comparison, our smoothing strategy based on the entropy penalty function does not need to tune such a parameter; we only need to set a large value of p in (22).
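Since the smoothed form is not reproduced above, here is a sketch of how an entropy (log-sum-exp) penalty smooths a two-term minimum such as min(r², c²)/2; the exact expression in the paper may differ, but this form has the stated property that the approximation gap shrinks as p grows (it is at most log(2)/p):

```python
import numpy as np

def smoothed_truncated_loss(r, c, p=100.0):
    """Entropy (log-sum-exp) smoothing of min(a, b), with a = r^2/2, b = c^2/2:
        -(1/p) * log(exp(-p*a) + exp(-p*b)).
    It is smooth in r and differs from min(a, b) by at most log(2)/p."""
    a = 0.5 * np.asarray(r, dtype=float) ** 2
    b = 0.5 * c ** 2
    m = np.minimum(a, b)   # subtract the dominant exponent for numerical stability
    return m - np.log(np.exp(-p * (a - m)) + np.exp(-p * (b - m))) / p
```

With p = 100 the gap is below 0.007 everywhere, which is negligible at the scale of the losses in Fig. 2.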

4.2 Sparse solution for Primal R-LSSVM

After obtaining δ^(t) by (22), the iterates in (19) are the solutions of the linear system (23). At first sight, (23) seems more complicated than (6). However, the coefficient matrix of (6) is a nonsingular symmetric dense matrix, which leads to a non-sparse solution of (6). In comparison, the coefficient matrix of (23) may be of low rank if the related kernel matrix is of low rank or is approximated by a low-rank matrix. In this situation, (23) may have a sparse solution, which partly overcomes the limitation of the previous methods.

We now discuss the sparse optimal solution of (23) when the kernel matrix can be approximated by a low-rank matrix. After a simple calculation, we obtain the bias from (23); eliminating it, (23) is simplified to the linear equation (24).
The Nyström approximation is the most popular method for obtaining a low-rank approximation of a kernel matrix (see [Williams2001, Petros2005, Zhang2010, Si2016] and the references therein). The low-rank approximation method is not the point of this paper; for simplicity, we employ Zhou's pivoting Cholesky factorization method [sszhou2016]. Let B be the index set of the landmark points, K_B be the sub-matrix of K whose elements are K_ij for i ∈ B and j ∈ B, and let the other sub-matrices be defined similarly. By the pivoting Cholesky factorization method in [sszhou2016], we can obtain a full-column-rank matrix P satisfying PP⊤ ≈ K as the best rank-k Nyström-type approximation of K under the trace norm; in the whole process, only the selected columns and the diagonal of the kernel matrix are needed. If the factor is obtained by some other low-rank approximation method [Williams2001, Petros2005, Zhang2010, Si2016], the following analysis is the same.
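A minimal greedy pivoted Cholesky sketch illustrates the key property claimed above: the factor P is built from only k columns and the diagonal of K, never the full matrix. This is a simplified illustration of the kind of factorization used in [sszhou2016], not its exact implementation:

```python
import numpy as np

def pivoted_cholesky(diag, get_col, k):
    """Greedy pivoted Cholesky: returns P (n x k) with P @ P.T ~= K,
    touching only the diagonal of K and the k selected columns
    (fetched on demand via get_col(j))."""
    n = len(diag)
    d = np.array(diag, dtype=float)      # residual diagonal of K - P P^T
    P = np.zeros((n, k))
    pivots = []
    for m in range(k):
        j = int(np.argmax(d))            # pivot: largest residual diagonal entry
        pivots.append(j)
        col = get_col(j) - P[:, :m] @ P[j, :m]
        P[:, m] = col / np.sqrt(d[j])
        d = np.maximum(d - P[:, m] ** 2, 0.0)
    return P, pivots
```

Running it to full rank on a small positive-definite kernel matrix recovers K exactly, while stopping at k ≪ n gives the rank-k approximation used by the algorithm.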

Substituting PP⊤ for K in (24), (24) is simplified to the linear system (25), where I is the k × k identity matrix. By permuting the rows of P, we obtain a partition into P_1 and P_2, where P_1 is a full-rank k × k matrix (and will be lower triangular if P is obtained as in [sszhou2016], so its inverse can be computed at a lower cost), and P_2 is comprised of the remaining rows of P. Partitioning the variables correspondingly, we have


is the sparse solution of (25), where


So the sparse R-LSSVM (SR-LSSVM) algorithm is obtained by iteratively updating as follows:


4.3 Sparse R-LSSVM algorithm

From the above analysis, our SR-LSSVM algorithm is listed as Algorithm 1.

Input: training set, kernel matrix K, regularization parameter γ, truncation parameter c, stopping criterion, and rank k.
Output: β and b.
1:  Find P and the pivot set such that PP⊤ ≈ K;
2:  Compute the initial point as in (27). Set t = 0;
3:  Update β and b by (28) and (29);
4:  Compute δ^(t) by (22);
5:  if the stopping criterion is satisfied then
6:     stop with the current β and b;
7:  else
8:     set t ← t + 1 and go to step 3.
9:  end if
Algorithm 1: SR-LSSVM (Sparse R-LSSVM)

After obtaining the optimal β and b by Algorithm 1, the decision function for regression is given in (30). For classification, the decision function is its sign. We give some comments about Algorithm 1.

Comment 1. For a suitable choice of starting point, the first cycle of Algorithm 1 is equivalent to solving the primal LSSVM (P-LSSVM) problem [sszhou2016].

Comment 2. In equation (28), an auxiliary variable can be introduced so that step 3 of Algorithm 1 updates the auxiliary variable instead of the original one, decreasing the cost of step 3 further. The output then needs to be recovered only in the last round.

Comment 3. To promote computational efficiency, equation (28) can be rewritten as (31). The quantities involved are the sparse solution of the primal LSSVM, the index set of its nonzero elements, the sub-matrix formed by the rows of P whose indices belong to that set, and the vector of the corresponding nonzero elements.

Then the step 2 and 3 in Algorithm 1 can be replaced with the following:

Step 2’: Compute and . Set ;

Step 3’: Update and by (31) and (29) respectively.

Comment 4. In Algorithm 1, the parameter c limits the upper bound of the loss function. It should be set neither too large nor too small; an improper c results in poor generalization performance. To overcome the sensitivity of the loss function to c, we can tune it as follows. First, set a somewhat larger initial value of c. Then add the following step between steps 3 and 4 of Algorithm 1: reduce c while the corresponding decrease criterion holds, until it reaches the preset minimum value.

4.4 Convergence and Complexity analysis

CCCP is globally or locally convergent; see [Yuille2003, Tao2014, Bharath2012]. Following the convergence proof of the DCA (DC algorithm) for general DC programs in [PHAM1997], we have the following lemma.

Lemma 2.

If the optimal value of problem (18) is finite and the infinite iterate sequences are bounded, then every limit point of the sequence is a generalized KKT point of (18).

Obviously, the objective functions of (18) and (9) are bounded below. Assume that the prediction error is bounded, which is reasonable in real applications; then δ^(t) is bounded by (22). Consequently, the iterates are also bounded, because of the boundedness of the quantities appearing in (28) and (29). By Lemma 2, we obtain the following theorem.

Theorem 2.

Assume the prediction error is bounded for all given samples with the selected parameters γ and c. Then every limit point of the iterate sequence is a generalized KKT point of problem (18); that is, Algorithm 1 is convergent.

For Algorithm 1, the computational costs of steps 1 and 2 are given in [sszhou2016]. The complexity of iteratively executing step 3 is proportional to the total number of SR-LSSVM iterations, and if we utilize the technique in Comment 3, the cost of step 3 is reduced further. In comparison, the computational complexities of Wang's and Yang's R-LSSVM algorithms [KuainiWang2014, XiaoweiYang2014] are both much higher, since they work with the full dense kernel matrix. It is obvious that our method has lower computational complexity than the existing approaches.

Parallel computing potential. In Algorithm 1, some calculations are inexpensive, so serial computing is sufficient for them. The costly calculations, however, can be carried out in parallel to further improve efficiency. The main computational cost of Algorithm 1 comes from computing P⊤P, which can be implemented in parallel: P can be partitioned by rows into blocks P_1, …, P_s, so that P⊤P = Σ_i P_i⊤P_i, which can be efficiently calculated by a parallel matrix multiplication algorithm, where P_i is the i-th row block of P.
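The block decomposition of the Gram product can be sketched as follows (a toy illustration using threads; in practice the blocks could go to separate processes or nodes):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gram_chunked(P, n_chunks=4):
    """Compute P^T P as a sum of per-block Grams: split P into row blocks P_i,
    then P^T P = sum_i P_i^T P_i. Each term is independent of the others,
    so the map below can run its tasks on separate workers."""
    blocks = np.array_split(P, n_chunks, axis=0)
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda Pi: Pi.T @ Pi, blocks))
    return sum(partials)
```

Each block contributes a small k × k matrix, so the reduction step is cheap regardless of the number of training samples n.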

5 Numerical experiments and discussions

To examine the validity of the proposed algorithm, on medium-sized data sets we compare SR-LSSVM with R-LSSVM-W [KuainiWang2014] (Wang's algorithm for R-LSSVM), R-LSSVM-Y [XiaoweiYang2014] (Yang's algorithm for R-LSSVM), the classical LSSVM, W-LSSVM [J.A.K.Suykens2002], FS-LSSVM [Suykens2002] as implemented in the LS-SVMlab v1.8 software [Brabanter2011_lssvmtoolbox], and the SVMs (C-SVC for classification and SVR for regression) implemented in the LIBSVM software. For some large-scale problems, we compare the proposed algorithm only with sparse algorithms, namely PCP-LSSVM [sszhou2016], FS-LSSVM [Suykens2002], Cholesky with side information (CSI) [Bach_csi2005], and C-SVC for classification or SVR for regression, since the other methods cannot be applied in this case.

All computations are implemented under Windows 8 with MATLAB R2014a. All experiments are run on a PC with an Intel Core i5-4210U CPU and a maximum of 8 GB of memory available for all processes.

We fixed the values of the smoothing parameter p in SR-LSSVM and the stopping criterion. For all data sets, we use a cross-validation procedure and grid search to find the best values of the parameters, including the parameter σ of the Gaussian kernel function K(x, z) = exp(−‖x − z‖²/(2σ²)) and the smoothing parameter of the method R-LSSVM-Y.

For R-LSSVM-W and R-LSSVM-Y, the running times in our article are much smaller than those reported in [KuainiWang2014, XiaoweiYang2014] for the same data sets, and the total complexity is reduced, because in our experiments the coefficient matrix of (6) is decomposed by Cholesky factorization only once and the decomposition is reused in every loop.

5.1 Classification experiments

In this section, we test one synthetic classification data set and several benchmark classification data sets to illustrate the effectiveness of SR-LSSVM. For the benchmark data sets, each attribute of the samples is normalized, and the data sets are separated into two groups: medium-sized data sets and large-scale data sets. All of them are downloaded from [lib]. The experimental results on the Adult data set show why we separate the data sets into two groups. Finally, we test the robustness of the proposed algorithm on large-scale data with outliers using the Cod-RNA data set. Outliers are generated by the following procedure: we choose the 30% of samples that are farthest from the decision hyperplane, then randomly sample one third of them and flip their labels to simulate outliers.

5.1.1 Synthetic classification dataset experiment

Figure 3: Comparison of the proposed SR-LSSVM with LSSVM, W-LSSVM and R-LSSVM-Y on a linearly inseparable classification data set with and without outliers. For (d) SR-LSSVM, the number of support vectors is 2 both with and without outliers. We do not mark SVs in subgraphs (a)-(c), because almost all training samples are SVs for LSSVM, W-LSSVM and R-LSSVM-Y. For the data set without outliers, the test accuracies of the four algorithms are all 91.50%; for the data set with outliers, the test accuracies are 89.50%, 90.00%, 91.50% and 91.50% for LSSVM, W-LSSVM, R-LSSVM-Y and SR-LSSVM, respectively

To compare the robustness and sparseness of the four algorithms LSSVM, W-LSSVM, R-LSSVM-Y and SR-LSSVM, we conduct an experiment on a linear binary classification data set including 60 training samples and 100 testing samples. Fig. 3 shows the experimental results. To simulate outliers, we add 4 training samples labeled with the wrong classes; they are specially marked for the positive and negative classes, respectively. Through grid search, we obtain the best parameter values for this data set.

Fig. 3 illustrates that the decision lines of LSSVM and W-LSSVM change greatly after adding outliers, and these two methods achieve lower accuracies than SR-LSSVM and R-LSSVM-Y. In contrast, the decision boundaries of SR-LSSVM and R-LSSVM-Y are almost unchanged, and the accuracies of these two approaches remain stable before and after adding outliers. So SR-LSSVM is insensitive to outliers. Moreover, almost all training samples are SVs for LSSVM, W-LSSVM and R-LSSVM-Y; by contrast, SR-LSSVM uses only 2 support vectors both with and without outliers. Hence the proposed algorithm yields sparse solutions, which accelerates training on large-scale problems.

5.1.2 Medium-scale benchmark classification datasets experiments

Data (train, test) | Algorithm | c | smooth. | Iterations | Training time (s) | nSVs | Accuracy (%)
Pendigits (1466, 733) | C-SVC | - | - | - | 0.36(0.02) | 433.5(7.7) | 99.95(0.001)
 | LSSVM | - | - | - | 0.12(0.01) | 1466() | 99.26(0.005)
 | W-LSSVM | - | - | - | 0.18(0.01) | 1464.6(1.7) | 99.92(0.002)
 | FS-LSSVM | - | - | - | 0.19(0.01) | 73(0) | 99.90(0.001)
 | R-LSSVM-W | 1.5 | - | 16.1(1.7) | 0.22(0.01) | 1466(0) | 99.37(0.003)
 | R-LSSVM-Y | 1.5 | 0.25 | 12.2(1.5) | 0.19(0.01) | 1436.8(8.0) | 99.09(0.005)
 | SR-LSSVM | 1.5 | - | 8.5(0.9) | 0.03() | 73() | 99.96(0.001)
Protein (8186, 3509) | C-SVC | - | - | - | 55.64(0.47) | 5486.8(27.1) | 77.98()
 | LSSVM | - | - | - | 22.67(0.30) | 8185.9(0.32) | 78.22(0.002)
 | W-LSSVM | - | - | - | 27.77(0.30) | 8185.7(0.67) | 78.24(0.003)
 | FS-LSSVM | - | - | - | 27.55(0.61) | 408.1(1.20) | 77.00(0.003)
 | R-LSSVM-W | 0.8 | - | 34.9(4.2) | 28.56(1.10) | 8184.8(0.92) | 77.81(0.023)
 | R-LSSVM-Y | 0.8 | 0.7 | 16.7(3.4) | 25.17(0.81) | 7876.7(46.6) | 78.23(0.004)
 | SR-LSSVM | 0.8 | - | 6(0) | 11.65() | 409() | 78.04(0.002)
Satimage (2110, 931) | C-SVC | - | - | - | 0.25(0.01) | 693.5(13.6) | 99.86()
 | LSSVM | - | - | - | 0.76(0.01) | 2109.6(0.7) | 99.23(0.002)
 | W-LSSVM | - | - | - | 0.90(0.02) | 1897.5(2.1) | 99.91(0.001)
 | FS-LSSVM | - | - | - | 0.50(0.02) | 105(0) | 97.93(0.008)
 | R-LSSVM-W | 0.5 | - | 17.1(1.5) | 0.88(0.03) | 2107.3(1.2) | 99.90(0.001)
 | R-LSSVM-Y | 0.5 | 0.3 | 11.7(1.2) | 0.87(0.03) | 1916.5(15.1) | 99.88(0.001)
 | SR-LSSVM | 0.5 | - | 6(2.7) | 0.27() | 105() | 99.97(0.001)
USPS (2199, 623) | C-SVC | - | - | - | 0.92(0.02) | 646.9(14.4) | 99.34()
 | LSSVM | - | - | - | 2.53(0.01) | 2198.7(0.5) | 99.34(0.002)
 | W-LSSVM | - | - | - | 2.67(0.01) | 1973.8(3.1) | 99.49(0.001)
 | FS-LSSVM | - | - | - | 1.21(0.01) | 109(0) | 98.28(0.006)
 | R-LSSVM-W | 1.1 | - | 9.3(0.84) | 2.60(0.02) | 2193.7(2.7) | 99.52(0.000)
 | R-LSSVM-Y | 1.1 | 0.15 | 10.2(0.92) | 2.60(0.02) | 2002.4(14.9) | 99.52(0.000)
 | SR-LSSVM | 1.1 | - | 6(0.79) | 1.91() | 108.7() | 99.52(0.000)
Splice (1000, 2175) | C-SVC | - | - | - | 0.12(0.01) | 820.8(8.3) | 76.38()
 | LSSVM | - | - | - | 0.19(0.01) | 1000(0) | 75.99(0.10)
 | W-LSSVM | - | - | - | 0.20(0.01) | 1000(0) | 76.04(0.10)
 | FS-LSSVM | - | - | - | 0.42(0.02) | 100(0) | 76.66(0.07)
 | R-LSSVM-W | 0.9 | - | 33.8(7.2) | 0.29(0.03) | 1000(0) | 75.15(0.14)
 | R-LSSVM-Y | 0.9 | 0.5 | 15.1(2.9) | 0.22(0.01) | 947.6(19.5) | 80.50(0.04)
 | SR-LSSVM | 0.9 | - | 25.8(7.1) | 0.19() | 100() | 81.27(0.03)
Mushrooms (5614, 2708) | C-SVC | - | - | - | 2.33(0.04) | 2244.8(27.1) | 99.99()
 | LSSVM | - | - | - | 5.75(0.08) | 5415.8(0.4) | 98.67(0.002)
 | W-LSSVM | - | - | - | 7.41(0.19) | 4837.8(12.2) | 99.90(0.001)
 | FS-LSSVM | - | - | - | 2.69(0.02) | 268.9(0.7) | 99.66(0.003)
 | R-LSSVM-W | 0.6 | - | 26.4(4.0) | 7.57(0.22) | 5381.8(6.6) | 99.97(0.000)
 | R-LSSVM-Y | 0.6 | 0.3 | 22.7(4.2) | 7.48(0.18) | 4928(16.9) | 99.71(0.001)
 | SR-LSSVM | 0.6 | - | 6(0) | 1.28() | 270() | 100(0)
  • Pendigits is a pen-based recognition of handwritten digits data set to classify the digits 0 to 9. We only classify the digit 3 versus 4 here.

  • Protein is a multi-class data set with 3 classes. Here a binary classification problem is trained to separate class 1 from 2.

  • Satimage is comprised of 6 classes. Here the task of classifying class 1 versus class 6 is trained.

  • USPS is a multi-class data set with 10 classes. Here a binary classification problem is trained to separate class 1 from class 2.

Table 1:

Comparison of the numbers of iterations, training time (seconds), mean number of support vectors (denoted by nSVs) and accuracies (%) of different algorithms on benchmark classification data sets with outliers (10%). The standard deviations are given in brackets. ’-’ means this parameter is not used by this method. The best values are highlighted in bold.

Table 1 reports the data information, optimal parameters and experimental results for the medium-scale classification data sets with outliers. The best results are highlighted in bold. In Table 1, the same subset size is used for SR-LSSVM and FS-LSSVM on all data sets except Splice. All algorithms are run independently 10 times to obtain unbiased results.

As regards accuracy, Table 1 illustrates that the proposed SR-LSSVM achieves higher accuracy than the other compared approaches on most data sets. As for training time, our method is faster than the other approaches except C-SVC. C-SVC trains quickly on some medium-scale data sets, but on the larger ones, such as Protein and Mushrooms, it is slower than SR-LSSVM, and its accuracy is also lower than that of SR-LSSVM.

In terms of sparseness, SR-LSSVM and FS-LSSVM need far fewer support vectors than the other approaches; in other words, these two methods produce sparse solutions. However, FS-LSSVM is less accurate than SR-LSSVM and spends more time than SR-LSSVM on all data sets. C-SVC also exhibits sparsity, but its support vector set is much larger than those of SR-LSSVM and FS-LSSVM, partly because of the outliers in the training set.

As for the number of iterations needed to solve the nonconvex R-LSSVM program, SR-LSSVM requires fewer iterations than R-LSSVM-W and R-LSSVM-Y to converge to the optimal solution.

5.1.3 Adult data set experiments

Figure 4: Training time, accuracies and their standard deviations of different algorithms on six subsets of the Adult data set with outliers (about 10%). The horizontal axis uses a logarithmic scale. The markers 4K to 20K denote 4000 to 20000 samples in the training stage; 'all' means using all 32561 samples. LSSVM, W-LSSVM, R-LSSVM-W and R-LSSVM-Y are implemented only on the subsets containing fewer than 15000 samples, due to the memory limitation of the computer. The basic subset size for SR-LSSVM and the working set size for FS-LSSVM are both set to 400. In the figure, the further to the upper left, the better

To investigate the performance of each algorithm on data sets of different sizes, we randomly choose 4000, 8000, 10000, 15000, 20000 and all 32561 training samples from the training set of the Adult data set [lib]. The test set size is 16281.

Fig. 4 shows the experimental results of all approaches on the data sets with outliers; the horizontal axis uses a logarithmic scale. In terms of accuracy, SR-LSSVM, C-SVC and FS-LSSVM generally perform better than the other methods, and SR-LSSVM performs best. In addition, from Fig. 4 we can conclude that on the medium-scale training sets, especially those with fewer than 8000 samples, every algorithm runs fast. However, when the training set size exceeds 20000, LSSVM, W-LSSVM, R-LSSVM-W and R-LSSVM-Y cannot run on our common computer due to lack of memory; hence, on the large-scale benchmark data sets we do not compare SR-LSSVM with these four methods. Moreover, Fig. 4 also shows that the training time of C-SVC increases rapidly as the training set grows.

5.1.4 Large-scale benchmark classification datasets experiments

Table 2 reports the data information, optimal parameters and experimental results for the large-scale data sets with outliers (10%). We compare SR-LSSVM with other sparse algorithms. For the Skin-nonskin data set, we randomly select 2/3 of the data as training samples and the rest as testing samples; for the others, we use the default settings in [lib]. All algorithms are run 5 times independently to obtain unbiased results for every data set. The best results are highlighted in bold.

Data sets (train, test) | Algorithm | c | Training time (s) | nSVs | Accuracy (%)
Skin-nonskin (163371, 81686) | C-SVC | - | 1329.4() | 59910() | 99.30()
 | FS-LSSVM | - | 42.6(0.7) | 199.2(0.8) | 99.82(0.000)
 | CSI | - | 42.9(0.7) | 100(0) | 99.84(0.000)
 | PCP-LSSVM | - | 32.7(0.6) | 400(0) | 99.86(0.000)
 | SR-LSSVM | 1.5 | 34.8() | 400() | 99.86(0.000)
IJCNN1 (49990, 91701) | C-SVC | - | 84.3() | 17103() | 92.16()
 | FS-LSSVM | - | 34.9(0.8) | 398.8(1.0) | 94.17(0.014)
 | CSI | - | 63.0(1.2)