Theory of Dual-sparse Regularized Randomized Reduction

04/15/2015, by Tianbao Yang, et al.

In this paper, we study randomized reduction methods, which reduce high-dimensional features into a low-dimensional space by randomized methods (e.g., random projection, random hashing), for large-scale high-dimensional classification. Previous theoretical results on randomized reduction methods hinge on strong assumptions about the data, e.g., a low rank of the data matrix or a large separable margin of classification, which hinder their application in broad domains. To address these limitations, we propose dual-sparse regularized randomized reduction methods that introduce a sparse regularizer into the reduced dual problem. Under a mild condition that the original dual solution is a (nearly) sparse vector, we show that the resulting dual solution is close to the original dual solution and concentrates on its support set. In numerical experiments, we present an empirical study to support the analysis, and we also present a novel application of the dual-sparse regularized randomized reduction methods to reducing the communication cost of distributed learning from large-scale high-dimensional data.


1 Introduction

As the scale and dimensionality of data continue to grow in many applications (e.g., bioinformatics, finance, computer vision, medical informatics) (Sánchez et al., 2013; Mitchell et al., 2004; Simianer et al., 2012; Bartz et al., 2011), it becomes critical to develop efficient and effective algorithms to solve big data machine learning problems. Randomized reduction methods for large-scale or high-dimensional data analytics have received a great deal of attention in recent years (Mahoney & Drineas, 2009; Shi et al., 2012; Paul et al., 2013; Weinberger et al., 2009; Mahoney, 2011). By either reducing the dimensionality (referred to as feature reduction) or reducing the number of training instances (referred to as instance reduction), the resulting problem has a smaller training set that is both memory-efficient and computation-efficient. While randomized instance reduction has been studied extensively for fast least-squares regression (Drineas et al., 2008, 2006, 2011; Ma et al., 2014), randomized feature reduction is more popular for linear classification (Blum, 2005; Shi et al., 2012; Paul et al., 2013; Weinberger et al., 2009; Shi et al., 2009a); e.g., random hashing is a notable built-in tool in Vowpal Wabbit (http://hunch.net/~vw/), a fast learning library, for solving high-dimensional problems. In this paper, we focus on the latter technique and refer to randomized feature reduction simply as randomized reduction.

Although several theoretical properties of randomized reduction methods applied to classification have been examined, e.g., generalization performance (Paul et al., 2013), preservation of margin (Blum, 2005; Balcan et al., 2006; Shi et al., 2012), and the recovery error of the model (Zhang et al., 2014), these previous results rely on strong assumptions about the data. For example, both (Paul et al., 2013) and (Zhang et al., 2014) assume the data matrix is of low rank, and (Blum, 2005; Balcan et al., 2006; Shi et al., 2012) make the assumption that all examples in the original space are separated with a positive margin (with high probability). Another analysis in (Zhang et al., 2014) assumes the weight vector for classification is sparse. These assumptions are too strong to hold in many real applications.

Contributions. To address these limitations, we propose dual-sparse regularized randomized reduction methods, referred to as DSRR, which leverage the (near) sparsity of dual solutions for large-scale high-dimensional (LSHD) classification problems (i.e., the number of (effective) support vectors is small compared to the total number of examples). In particular, we add a dual-sparse regularizer to the reduced dual problem. We present a novel theoretical analysis of the recovery error of the dual variables and the primal variable, and study its implications for different randomized reduction methods (e.g., random projection, random hashing, and random sampling).

Novelties. Compared with previous works (Blum, 2005; Balcan et al., 2006; Shi et al., 2012; Paul et al., 2013), our theoretical analysis requires only a mild assumption about the data and directly provides a guarantee of a small recovery error for the obtained model, which is critical for subsequent analysis, e.g., feature selection (Guyon et al., 2002; Brank et al., 2002) and model interpretation (Rätsch et al., 2005; Sonnenburg & Franc, 2010; Sonnenburg et al., 2007; Ben-Hur et al., 2008). For example, when exploiting a linear model to classify people into sick or not sick based on genomic markers, the learned weight vector is important for understanding the effect of different genomic markers on the disease and for designing effective medicine (Jostins & Barrett, 2011; Kang & Cho, 2011). In addition, the recovery could also increase the predictive performance, in particular when there is noise in the original features (Goldberger et al., 2005).

Compared with (Zhang et al., 2014), which proposes to recover a linear model in the original feature space by dual recovery, i.e., constructing a weight vector using the dual variables learned from the reduced problem and the original feature vectors, our methods are better in that (i) we rely on a more realistic assumption, namely the sparsity of the dual variables (e.g., in the support vector machine (SVM)); (ii) we analyze both smooth and non-smooth loss functions (they focused only on smooth loss functions); and (iii) we study different randomized reduction methods in the same framework, not just random projection.

In numerical experiments, we present an empirical study on a real data set to support our analysis, and we also demonstrate a novel application of the reduction and recovery framework to distributed learning from LSHD data, which combines the benefits of two complementary techniques for addressing big data problems. Distributed learning/optimization has recently received significant interest for solving big data problems (Jaggi et al., 2014; Li et al., 2014; Yang, 2013; Agarwal et al., 2011). However, it is notorious for its high communication cost, especially when the dimensionality of the data is very high. By solving a dimensionality-reduced data problem and using the recovered solution as an initial solution to the distributed optimization on the original data, we can reduce the number of iterations and hence the communication cost. In practice, we employ the recently developed distributed stochastic dual coordinate ascent algorithm (Yang, 2013) and observe that, using the recovered solution as an initial solution, we are able to attain almost the same performance with only one or two communications of high-dimensional vectors among multiple machines.

2 Preliminaries

Let (x_1, y_1), …, (x_n, y_n) denote a set of training examples, where x_i ∈ R^d and y_i ∈ {+1, −1}. Assume both n and d are very large. The goal of classification is to solve the following optimization problem:

(1)

where ℓ(·) is a convex loss function and λ > 0 is a regularization parameter. Using the conjugate function, we can turn the problem into a dual problem:

(2)

where X ∈ R^{d×n} is the data matrix and ℓ* is the convex conjugate function of ℓ. Given the optimal dual solution α*, the optimal primal solution w* can be computed from α* and X. For LSHD problems, directly solving the primal problem (1) or the dual problem (2) could be very expensive. We aim to address this challenge by randomized reduction methods. Let A ∈ R^{m×d} denote a randomized reduction operator that reduces a d-dimensional feature vector into an m-dimensional feature vector (m ≪ d). Let x̂_i = A x_i denote the reduced feature vector. With the reduced feature vectors of the training examples, a conventional approach is to solve the following reduced primal problem

(3)

or its dual problem

(4)

where X̂ = AX is the reduced data matrix. Previous studies have analyzed the reduced problems for random projection methods and proved the preservation of margin (Blum, 2005; Shi et al., 2012) and the preservation of the minimum enclosing ball (Paul et al., 2013). Zhang et al. (2014) proposed a dual recovery approach that constructs a recovered solution in the original space from the dual variables of the reduced problem and the original feature vectors, and proved a recovery error bound for random projection under the assumption of a low-rank data matrix or a sparse optimal primal solution. In addition, they also showed that the naive recovery, which maps the solution of the reduced primal problem back to the original space through A^T, has a large recovery error.
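The displayed equations (1)–(4) did not survive the extraction. As a point of reference only, the following LaTeX sketch gives a standard ℓ2-regularized ERM instantiation that is consistent with the surrounding text (a conjugate-based dual and dual recovery from the dual variables and the original features); the paper's exact constants and sign conventions may differ, and the label-scaled data matrix X is an assumption of this sketch.

```latex
% Sketch of the assumed formulation (constants and signs may differ from the paper).
% Primal problem (1):
\min_{\mathbf{w}\in\mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^n \ell\bigl(y_i\,\mathbf{w}^{\top}\mathbf{x}_i\bigr)
  \;+\;\frac{\lambda}{2}\|\mathbf{w}\|_2^2
% Dual problem (2), with X=(y_1\mathbf{x}_1,\ldots,y_n\mathbf{x}_n)\in\mathbb{R}^{d\times n}:
\max_{\boldsymbol{\alpha}\in\mathbb{R}^n}\ -\frac{1}{n}\sum_{i=1}^n \ell^{*}(\alpha_i)
  \;-\;\frac{1}{2\lambda n^2}\,\boldsymbol{\alpha}^{\top}X^{\top}X\boldsymbol{\alpha},
  \qquad \mathbf{w}^{*}=-\frac{1}{\lambda n}X\boldsymbol{\alpha}^{*}
% Reduced primal (3) and reduced dual (4), with \widehat{X}=AX:
\min_{\mathbf{u}\in\mathbb{R}^m}\ \frac{1}{n}\sum_{i=1}^n \ell\bigl(y_i\,\mathbf{u}^{\top}A\mathbf{x}_i\bigr)
  +\frac{\lambda}{2}\|\mathbf{u}\|_2^2,
  \qquad
\max_{\boldsymbol{\alpha}\in\mathbb{R}^n}\ -\frac{1}{n}\sum_{i=1}^n \ell^{*}(\alpha_i)
  -\frac{1}{2\lambda n^2}\,\boldsymbol{\alpha}^{\top}\widehat{X}^{\top}\widehat{X}\boldsymbol{\alpha}
% Under this convention, dual recovery sets \widehat{\mathbf{w}}=-\tfrac{1}{\lambda n}X\widehat{\boldsymbol{\alpha}}.
```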

One deficiency of the simple dual recovery approach is that, due to the reduction of the feature space, many non-support vectors of the original optimization problem become support vectors, which can substantially corrupt the recovered solution. As a result, the original analysis of the dual recovery method requires a strong assumption on the data (i.e., the low-rank assumption). In this work, we address this limitation in a different way, which allows us to relax the assumption significantly.

3 DSRR and its Guarantee

To reduce the number, or the contribution, of training instances that are non-support vectors in the original optimization problem but are turned into support vectors by the reduction of the feature space, we employ a simple trick: we add a dual-sparse regularizer to the reduced dual problem. In particular, we solve the following problem:

(5)

where X̂ = AX denotes the reduced data matrix as before, and τ ≥ 0 is a regularization parameter whose theoretical value will be revealed later.
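The displayed problem (5) is missing from the extraction. Assuming it takes the natural form of the reduced dual (4) minus a dual-sparse penalty (τ/n)·||α||_1, and taking the squared hinge loss (whose conjugate is ℓ*(α) = α + α²/4 for α ≤ 0 and +∞ otherwise), the problem can be solved by proximal gradient descent on the negated objective. The sketch below is purely illustrative: the paper does not prescribe a particular solver, and all names, constants, and the symbol τ are our assumptions.

```python
import numpy as np

def dsrr_dual_prox_grad(X_hat, lam, tau, n_iter=500):
    """Sketch of solving an assumed form of the DSRR dual (5) for the squared
    hinge loss: minimize over alpha <= 0
        (1/n) * sum(alpha + alpha**2 / 4)        # conjugate of the squared hinge loss
        + 1/(2*lam*n**2) * ||X_hat @ alpha||^2   # reduced quadratic term
        + (tau/n) * ||alpha||_1                  # dual-sparse regularizer
    X_hat is the reduced (label-scaled) data matrix of shape (m, n)."""
    m, n = X_hat.shape
    alpha = np.zeros(n)
    # Lipschitz constant of the smooth part: 1/(2n) + sigma_max(X_hat)^2 / (lam * n^2).
    sigma_max = np.linalg.norm(X_hat, 2)
    eta = 1.0 / (0.5 / n + sigma_max ** 2 / (lam * n ** 2))
    for _ in range(n_iter):
        grad = (1.0 + alpha / 2.0) / n + X_hat.T @ (X_hat @ alpha) / (lam * n ** 2)
        v = alpha - eta * grad
        # Prox of (tau/n)|alpha| + indicator(alpha <= 0): shift up by eta*tau/n, clip at 0.
        alpha = np.minimum(0.0, v + eta * tau / n)
    return alpha

# Toy usage with synthetic reduced data.
rng = np.random.default_rng(0)
X_hat = rng.standard_normal((50, 200))
alpha_hat = dsrr_dual_prox_grad(X_hat, lam=0.1, tau=0.05)
print("nonzeros in alpha_hat:", np.count_nonzero(alpha_hat))
```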

To further understand the added dual-sparse regularizer, we consider SVM, where the loss function can be either the hinge loss ℓ(z) = max(0, 1 − z) (a non-smooth function) or the squared hinge loss ℓ(z) = max(0, 1 − z)^2 (a smooth function). We first consider the hinge loss, whose conjugate is ℓ*(α) = α for α ∈ [−1, 0] (and +∞ otherwise). Then the new dual problem is equivalent to the reduced SVM dual with the linear term scaled by (1 − τ).

Using a variable transformation of the dual variables, the above problem is equivalent to the dual of an SVM with a shrunken margin (a short derivation is sketched below).

Changing into the primal form, we have

(6)

where the loss is a max-margin loss with margin given by 1 − τ. It can thus be understood that adding the regularization in the reduced problem of SVM is equivalent to using a max-margin loss with a smaller margin, which is intuitive because examples become more difficult to separate after dimensionality reduction, and it is consistent with several previous studies showing that the margin shrinks in the reduced feature space (Blum, 2005; Shi et al., 2012). Similarly, for the squared hinge loss, the equivalent primal problem is

(7)

where the loss is the squared hinge loss with margin 1 − τ.
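To make the margin-shrinking claim concrete, here is a brief derivation sketch under the assumed form of (5) above, with dual-sparse parameter τ; it is our reconstruction, consistent with the surrounding text, not a verbatim reproduction of the paper's equation (6).

```latex
% Hinge loss: \ell(z)=\max(0,1-z) with \ell^{*}(\alpha)=\alpha for \alpha\in[-1,0].
% For \alpha\in[-1,0]^n we have \|\boldsymbol{\alpha}\|_1=-\sum_i\alpha_i, so the DSRR dual objective is
-\frac{1}{n}\sum_{i=1}^n\alpha_i-\frac{\tau}{n}\|\boldsymbol{\alpha}\|_1
  -\frac{1}{2\lambda n^2}\boldsymbol{\alpha}^{\top}\widehat{X}^{\top}\widehat{X}\boldsymbol{\alpha}
=-\frac{1-\tau}{n}\sum_{i=1}^n\alpha_i
  -\frac{1}{2\lambda n^2}\boldsymbol{\alpha}^{\top}\widehat{X}^{\top}\widehat{X}\boldsymbol{\alpha}.
% With \boldsymbol{\beta}=-\boldsymbol{\alpha}\in[0,1]^n this is the dual of an SVM whose hinge loss has margin (1-\tau):
\min_{\mathbf{u}\in\mathbb{R}^m}\ \frac{1}{n}\sum_{i=1}^n
  \max\bigl(0,\,(1-\tau)-y_i\mathbf{u}^{\top}\widehat{\mathbf{x}}_i\bigr)
  +\frac{\lambda}{2}\|\mathbf{u}\|_2^2 .
```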

Although adding a dual-sparse regularizer is intuitive and can be motivated from previous results, we emphasize that the proposed dual-sparse formulation provides a new perspective and bounding the dual recovery error is a non-trivial task, which is a major contribution of this paper. We first state our main result in Theorem 1 for smooth loss functions.

Theorem 1.

Let be the optimal dual solution to (5). Assume is -sparse with the support set given by . If , then we have

(8)

Furthermore, if the loss is a γ-smooth function (a function is γ-smooth if its gradient is γ-Lipschitz continuous), we have

(9)
(10)

where is the complement of , and is a vector that only contains the elements of in the set .

Remark 1: The proof is presented in Appendix A. It can be seen that the dual recovery error is proportional to the theoretical value of the regularization parameter, which in turn depends on a perturbation term that we can bound without any assumption about the data matrix or the optimal dual variable. In contrast, previous bounds (Zhang et al., 2013, 2014; Paul et al., 2013) depend on a quantity that requires the low-rank assumption on the data matrix. In the next section, we provide an upper bound of the perturbation term that allows us to understand how the reduced dimensionality affects the recovery error. Essentially, the results indicate that for random projection, randomized Hadamard transform, and random hashing, the perturbation is small with high probability, and thus the recovery error is of the same order as in (Zhang et al., 2013, 2014), which assume a low rank of the data matrix.

Remark 2: We would like to make a connection with LASSO for sparse signal recovery. In sparse signal recovery from noisy measurements, if a LASSO problem is solved for the solution, then the regularization parameter is required to be larger than a quantity that depends on the noise in order to obtain an accurate recovery (Eldar & Kutyniok, 2012). Similarly, in our formulation the added regularization counteracts the noise introduced into the reduced data compared with the original data, and the theoretical value of the regularization parameter depends on that noise.
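For concreteness, the standard LASSO condition alluded to here can be written in generic notation (the symbols B, β, e, μ are introduced for this note and are not the paper's):

```latex
% Noisy linear measurements and the LASSO estimator:
\mathbf{y}=B\boldsymbol{\beta}^{*}+\mathbf{e},\qquad
\widehat{\boldsymbol{\beta}}=\arg\min_{\boldsymbol{\beta}}\ \tfrac12\|\mathbf{y}-B\boldsymbol{\beta}\|_2^2
  +\mu\|\boldsymbol{\beta}\|_1 .
% Accurate recovery typically requires the regularization to dominate the noise level:
\mu\ \ge\ c\,\|B^{\top}\mathbf{e}\|_{\infty}\quad\text{for some constant } c\ge 1 .
```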

To present the theoretical result on the non-smooth loss functions, we need to introduce restricted eigen-value conditions similar to those used in the sparse recovery analysis for LASSO (Bickel et al., 2009; Xiao & Zhang, 2013). In particular, we introduce the following definition of restricted eigen-value condition.

Definition 2.

Given an integer , we define

We say that

satisfies the restricted eigenvalue condition at sparsity level s if there exist positive constants

and such that

We also define another quantity that measures the restricted eigen-value of , namely

(11)
Theorem 3.

Let be the optimal dual solution to (5). Assume is -sparse with the support set given by . If , then we have

Assume further that the data matrix satisfies the restricted eigen-value condition at the relevant sparsity level; then we have

Remark 3: The proof is included in Appendix B. Compared to smooth loss functions, the conditions that guarantee a small recovery error for non-smooth loss functions are more restrictive. In the next section, we will provide a bound on the relevant quantity to further understand the condition, which essentially requires the reduced dimensionality to be sufficiently large.

Last but not least, we provide a theoretical result on the recovery error when the optimal dual variable is only nearly sparse. We state the result for smooth loss functions. To quantify the near sparsity, we consider the vector that zeros all entries of the optimal dual solution except for the top elements in magnitude, and assume it satisfies the following condition:

(12)

The above condition can be considered as a sub-optimality condition (Boyd & Vandenberghe, 2004) measured in the infinity norm; for the exactly optimal solution, it holds trivially.

Theorem 4.

Let be the optimal dual solution to (5). Assume is nearly -sparse such that (12) holds with the support set of given by . If , then we have

Furthermore, if the loss is a γ-smooth function, we have

(13)
(14)

Remark 4: The proof appears in Appendix C. Compared to Theorem 1 for an exactly sparse optimal dual solution, the dual recovery error bound for a nearly sparse optimal dual solution is increased by an additional term in each of the two norm bounds.

Finally, we note that with the recovery error bound for the dual solution, we can easily derive an error bound for the recovered primal solution. Below we present a theorem for smooth loss functions; one can easily extend the result to non-smooth loss functions.

Theorem 5.

Let the recovered primal solution be constructed from the optimal dual solution to (5). Assume the optimal dual solution is sparse and the loss is a γ-smooth function. If the condition on the regularization parameter holds, then we have

where the bound involves the maximum singular value of the data matrix. Furthermore, if the data matrix satisfies the restricted eigen-value condition at the relevant sparsity level, then

Remark 5: Since the restricted eigen-value is never larger than the maximum singular value, the second bound (which holds under the restricted eigen-value condition) is always at least as good as the first. With the bound revealed later, we can see how the error of the recovered primal solution scales with the sparsity of the optimal dual solution, the reduced dimensionality, and the magnitude of the optimal dual solution. A similar order of error bound was established in (Zhang et al., 2014) assuming the optimal primal solution is sparse and the data matrix is approximately low rank. In contrast, we do not assume the data matrix is approximately low rank.

4 Analysis

In this section, we provide an upper bound analysis of the two key quantities appearing in the above theorems. To facilitate our analysis, we define the following.

4.1 Bounding

A critical condition in both Theorem 1 and Theorem 3 is the lower bound on the regularization parameter. In order to reveal its theoretical value and its implication for various randomized reduction methods, we need to bound the perturbation term. We first provide a general analysis and then study its implication for various randomized reduction methods separately. The analysis is based on the following assumption, which is essentially implied by Johnson-Lindenstrauss (JL)-type lemmas.

Assumption 1 (A1).

Let A ∈ R^{m×d} be a linear projection operator such that, for any given vector, with high probability the squared Euclidean norm is preserved up to a multiplicative distortion, where the distortion depends on m and the failure probability, and possibly on d.

With this assumption, we have the following theorem regarding the upper bound of .

Theorem 6.

Suppose A satisfies Assumption A1; then with high probability we have

where .

Proof.

where we use the fact . Then

Therefore, in order to bound the quantity of interest, we need to bound the inner-product distortion for all pairs of training examples. We first bound it for an individual pair and then apply the union bound. Consider two examples normalized to unit norm. If Assumption A1 is satisfied, then with high probability,

Similarly with a probability ,

Therefore with a probability , we have

Then applying the union bound, we complete the proof.
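As a small sanity check of the argument above (norm preservation plus a union bound controls all pairwise inner products), the following Python experiment, with illustrative names and a Gaussian projection, compares the largest inner-product distortion over a set of unit vectors with the sqrt(log(n)/m) scaling suggested by JL-type lemmas.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2000, 100                       # original dimension, number of vectors
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, axis=0)         # unit-norm columns

for m in (100, 400, 1600):
    A = rng.standard_normal((m, d)) / np.sqrt(m)   # Gaussian random projection
    Xh = A @ X
    # Largest distortion over all pairwise inner products (diagonal = squared norms).
    err = np.abs(Xh.T @ Xh - X.T @ X).max()
    print(f"m={m:5d}  max distortion = {err:.3f}  sqrt(log(n)/m) = {np.sqrt(np.log(n) / m):.3f}")
```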

Next, we discuss four classes of randomized reduction operators, namely random projection, randomized Hadamard transform, random hashing and random sampling, study the corresponding distortion bounds, and discuss their implications for the recovery error.

Random Projection. Random projection has been employed widely for dimensionality reduction. The entries of the projection operator are usually sampled from sub-Gaussian distributions with zero mean and appropriately scaled variance, e.g., (i) the Gaussian distribution, (ii) the Rademacher distribution, or (iii) a sparse discrete distribution. The last two distributions for dimensionality reduction were proposed and analyzed in (Achlioptas, 2003). The following lemma is the general JL-type lemma for projection matrices with sub-Gaussian entries, which reveals the value of the distortion in Assumption A1.

Lemma 1.

(Nelson) Let A be a random matrix with sub-Gaussian entries of zero mean and appropriately scaled variance. For any given vector, with high probability, we have

where the constant in the bound is some small universal constant.
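To make the three choices concrete, here is an illustrative Python sketch of sampling A for each distribution. The scalings assume the common convention that entries have variance 1/m, and the sparse discrete distribution uses ±√3 with probability 1/6 each and 0 with probability 2/3, following our reading of (Achlioptas, 2003).

```python
import numpy as np

def gaussian_projection(m, d, rng):
    # Entries ~ N(0, 1/m).
    return rng.standard_normal((m, d)) / np.sqrt(m)

def rademacher_projection(m, d, rng):
    # Entries +/-1 with equal probability, scaled so each entry has variance 1/m.
    return rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)

def sparse_discrete_projection(m, d, rng):
    # Achlioptas-style: sqrt(3) * {+1 w.p. 1/6, 0 w.p. 2/3, -1 w.p. 1/6}, scaled by 1/sqrt(m).
    vals = rng.choice([-np.sqrt(3.0), 0.0, np.sqrt(3.0)], size=(m, d), p=[1/6, 2/3, 1/6])
    return vals / np.sqrt(m)

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
for make_A in (gaussian_projection, rademacher_projection, sparse_discrete_projection):
    A = make_A(200, 5000, rng)
    # Ratio close to 1 indicates the squared norm is approximately preserved.
    print(make_A.__name__, np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)
```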

Randomized Hadamard Transform. The randomized Hadamard transform was introduced to speed up random projection, reducing the computational time (i.e., the running time of computing the reduced vector) of random projection from O(md) to O(d log d) or even less. The projection matrix is of the form A = P H D, where

  • D is a diagonal matrix whose diagonal entries are +1 or −1 with equal probabilities.

  • H is the Hadamard matrix (assuming d is a power of 2), scaled by 1/√d.

  • P is typically a sparse matrix that facilitates computing P(HDx). Several choices of P are possible (Nelson; Ailon & Chazelle, 2009; Tropp, 2011). Below we provide a JL-type lemma for a randomized Hadamard transform in which P samples m coordinates with replacement.

Lemma 2.

(Nelson) Let A = PHD be a randomized Hadamard transform with P being a random sampling matrix. For any given vector, with high probability, we have

where is some small universal constant.

Remark 6: Compared to random projection, there is an additional factor in the distortion bound. However, it can be removed by applying an additional random projection: in particular, if we compose the randomized Hadamard transform with a random sampling matrix and a random projection matrix that satisfies Lemma 1, then we obtain the same order of distortion as random projection. Please refer to (Nelson) for more details.
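A minimal Python sketch of the transform Ax = PHDx, under the assumptions that d is a power of two, H is scaled to be orthonormal, and P samples m coordinates with replacement with a √(d/m) rescaling (the exact scaling in the paper's definition may differ). scipy.linalg.hadamard is used to build H explicitly for illustration; a fast Walsh–Hadamard transform would be used in practice.

```python
import numpy as np
from scipy.linalg import hadamard

def randomized_hadamard_transform(x, m, rng):
    """Sketch of A x with A = P H D: random sign flips (D), orthonormal Hadamard
    rotation (H / sqrt(d)), then sampling m coordinates with replacement (P)."""
    d = len(x)
    signs = rng.choice([-1.0, 1.0], size=d)      # D
    H = hadamard(d) / np.sqrt(d)                 # H with orthonormal scaling
    rotated = H @ (signs * x)
    idx = rng.integers(0, d, size=m)             # P: sample with replacement
    return np.sqrt(d / m) * rotated[idx]

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
x_hat = randomized_hadamard_transform(x, m=128, rng=rng)
# The squared norms should be close (equal in expectation).
print(np.linalg.norm(x) ** 2, np.linalg.norm(x_hat) ** 2)
```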

Random Hashing. Another line of work to speed up random projection is random hashing, which makes the projection matrix much sparser and takes advantage of the sparsity of feature vectors. It was introduced in (Shi et al., 2009b) for dimensionality reduction and later improved to an unbiased version by (Weinberger et al., 2009) with some theoretical analysis. Dasgupta et al. (2010) provided a rigorous analysis of the unbiased random hashing. Recently, Kane & Nelson (2014) proposed two new random hashing algorithms with a slightly sparser random matrix. Here we provide a JL-type lemma for the random hashing algorithm in (Weinberger et al., 2009; Dasgupta et al., 2010). Let h denote a random hashing function that maps the d coordinates into m buckets, and let σ(1), …, σ(d) denote Rademacher random variables, i.e., they are independent and take the values +1 and −1 with equal probabilities. The projection matrix can be written as A = H D, where D is a diagonal matrix with D_jj = σ(j), and H ∈ {0, 1}^{m×d} with H_{kj} = 1 if h(j) = k and H_{kj} = 0 otherwise. Under the random matrix A, the feature vector x is reduced to x̂ with x̂_k = Σ_{j: h(j)=k} σ(j) x_j. The following JL-type lemma is a basic result from (Dasgupta et al., 2010) with a rephrasing.

Lemma 3.

Let A be a random hashing matrix. For any given vector whose infinity norm is sufficiently small relative to its Euclidean norm, with high probability, we have

where .

Remark 7: Compared to random projection, there is an additional condition on the feature vector. However, it can be removed by applying an extra preconditioner to the feature vector before applying the projection matrix. Two preconditioners were discussed in (Dasgupta et al., 2010): one corresponds to duplicating the feature vector several times and scaling it accordingly, and the other consists of diagonal blocks of the randomized Hadamard matrix. The reduction using the latter preconditioner runs in nearly linear time in the dimensionality.
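A minimal Python sketch of the hashing reduction x̂_k = Σ_{j: h(j)=k} σ(j) x_j described above, with the hash and sign functions simulated by a seeded random generator; names are illustrative and no preconditioner is applied.

```python
import numpy as np

def feature_hash(x, m, seed=0):
    """Sketch of random hashing A = H D: coordinate j goes to bucket h(j) with a
    random sign sigma(j); x_hat[k] = sum_{j: h(j)=k} sigma(j) * x[j]."""
    rng = np.random.default_rng(seed)
    d = len(x)
    buckets = rng.integers(0, m, size=d)       # h(j)
    signs = rng.choice([-1.0, 1.0], size=d)    # sigma(j)
    x_hat = np.zeros(m)
    np.add.at(x_hat, buckets, signs * x)       # accumulate signed values per bucket
    return x_hat

x = np.random.default_rng(1).standard_normal(10000)
x_hat = feature_hash(x, m=256)
# The hashing sketch is unbiased: squared norms agree in expectation.
print(np.linalg.norm(x) ** 2, np.linalg.norm(x_hat) ** 2)
```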

Random Sampling.

Last, we discuss random sampling and compare it with the aforementioned randomized reduction methods. In fact, the JL-type lemma for random sampling is implicit in the proof of Lemma 2. We make it explicit in the following lemma.

Lemma 4.

Let A be a scaled random sampling matrix that samples m coordinates with replacement. Then with high probability, we have

Remark 8: Compared with the other three randomized reduction methods, there is an additional factor in the distortion bound, which could result in a much larger bound and consequently a larger recovery error. That is why the randomized Hadamard transform was introduced: it makes this additional factor close to a constant.
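For comparison, a minimal sketch of scaled random sampling. Trying it on a one-hot (spiky) vector makes Remark 8 concrete: the squared-norm estimate is then either 0 or d/m times too large, which is exactly the data-dependent factor the Hadamard rotation is meant to remove.

```python
import numpy as np

def scaled_random_sampling(x, m, rng):
    # Sample m coordinates uniformly with replacement and rescale so that
    # E[||x_hat||^2] = ||x||^2.
    d = len(x)
    idx = rng.integers(0, d, size=m)
    return np.sqrt(d / m) * x[idx]

rng = np.random.default_rng(0)
spiky = np.zeros(10000); spiky[0] = 1.0       # worst case for sampling
flat = np.ones(10000) / 100.0                 # well-spread vector, ||flat||^2 = 1
for name, v in [("spiky", spiky), ("flat", flat)]:
    est = np.linalg.norm(scaled_random_sampling(v, 200, rng)) ** 2
    print(name, est)
```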

From the above discussion, we can conclude that for random projection, randomized Hadamard transform and random hashing, the stated norm-preservation bounds hold with high probability, which essentially determines the theoretical value of the dual-sparse regularization parameter.

4.2 Bounding for non-smooth case

Another condition in Theorem 3 involves a data-dependent quantity; since it depends on the data, we provide an upper bound on it to further understand the condition. In the following analysis, we assume the feature vectors are normalized. Recall the definition:

(15)

We provide a bound of below.

The key idea is to use the convex relaxation of . Define . It was shown in (Plan & Vershynin, 2011) that , where is the convex hull of the set . It is not difficult to show that (see the supplement)

Let and . For any fixed , with a probability we can have

where we use

Then by using Lemma 3.3 in (Plan & Vershynin, 2011) about the entropy of and the union bound, we can arrive at the following upper bound for .

Theorem 7.

With a probability , we have

Remark 9: With the above result, we can further understand the condition in Theorem 3, which amounts to a lower bound on the reduced dimensionality that scales with the restricted condition number of the data matrix.

5 Numerical Experiments

In this section, we provide a case study in support of DSRR and the theoretical analysis, and a demonstration of the application of DSRR to distributed optimization.

A case study on text classification. We use the RCV1-binary data (Lewis et al., 2004) to conduct a case study. The data contains 697,641 documents and 47,236 features. We use a 677,399/20,242 split for training and testing (Table 1). The feature vectors were normalized to unit norm. We only report the results using random hashing since it is the most efficient, while the other randomized reduction methods (except for random sampling) have similar performance. For the loss function, we use both the squared hinge loss (smooth) and the hinge loss (non-smooth). We aim to examine two questions related to our analysis and motivation: (i) how does the value of the dual-sparse regularization parameter affect the recovery error? (ii) how does the number of samples affect the recovery error?

We vary the regularization parameter, the dual-sparse regularization parameter, and the reduced dimensionality over several values. Note that a zero dual-sparse regularization parameter corresponds to the randomized reduction approach without the sparse regularizer. The results averaged over 5 random trials are shown in Figure 1 for the squared hinge loss and in Figure 2 for the hinge loss. We first analyze the results in Figure 1. We can observe that as the dual-sparse regularization parameter increases, the ratio of the dual mass outside the original support decreases, indicating that the magnitude of the dual variables for the original non-support vectors decreases. This is intuitive and consistent with our motivation. The recovery error of the dual solution (middle) first decreases and then increases. This can be partially explained by the theoretical result in Theorem 1: when the regularization parameter becomes larger than a certain threshold, so that the condition of Theorem 1 holds, the theorem implies that a larger value leads to a larger error; on the other hand, below the threshold, the dual recovery error decreases as the parameter increases. In addition, the figures exhibit that the thresholds for larger reduced dimensionalities are smaller, which is consistent with our analysis, since a smaller reduced dimensionality leads to a larger perturbation. In terms of the hinge loss, we observe similar trends; however, the recovery is much more difficult than for the squared hinge loss, especially when the regularization parameter is small.

Figure 1: Recovery error for squared hinge loss (three panels, from left to right).
Figure 2: Same curves as Figure 1 but for the non-smooth hinge loss.

An application to distributed learning. Although in some cases the solution learned in the reduced space can provide sufficiently good performance, it usually performs worse than the optimal solution of the original problem, and sometimes the performance gap between them cannot be ignored, as seen in the following experiments. To address this issue, we combine the benefits of distributed learning and the proposed randomized reduction methods for solving big data problems. When the data is too large and sits on multiple machines, distributed learning can be employed to solve the optimization problem. In distributed learning, individual machines iteratively solve sub-problems associated with the subset of data on them and communicate some global variables (e.g., the primal solution) among them. When the dimensionality is very large, the total communication cost could be very high. To reduce the total communication cost, we propose to first solve the reduced data problem and then use the found solution as the initial solution to the distributed learning on the original data.

Below, we demonstrate the effectiveness of DSRR for the recently proposed distributed stochastic dual coordinate ascent (DisDCA) algorithm (Yang, 2013). The procedure is: (1) reduce the original high-dimensional data to a very low-dimensional space on individual machines; (2) use DisDCA to solve the reduced problem; (3) use the optimal dual solution to the reduced problem as an initial solution to DisDCA for solving the original problem. We record the running time for the randomized reduction in step 1, the optimization of the reduced problem in step 2, and the optimization of the original problem in step 3. We compare the performance of four methods: (i) the DSRR method that uses the model of the reduced problem solved by DisDCA to make predictions; (ii) the method that uses the recovered model in the original space, referred to as DSRR-Rec; (iii) the method that uses the dual solution to the reduced problem as an initial solution of DisDCA and runs it on the original problem with one or two communications (the number of updates before each communication is set to the number of examples on each machine), referred to as DSRR-DisDCA-1/2; and (iv) the distributed method that directly solves the original problem by DisDCA. For DisDCA solving the original problem, we stop running when its performance on the testing data does not improve. Two data sets are used, namely RCV1-binary and the KDD Cup 2010 data. For the KDD Cup 2010 data, we use the version available on the LibSVM data website. The statistics of the two data sets are summarized in Table 1. The results averaged over 5 trials are shown in Figure 3, which exhibit that the performance of DSRR-DisDCA-1/2 is remarkable in the sense that it achieves almost the same performance as directly training on the original data (DisDCA) while using much less training time. In addition, DSRR-DisDCA performs much better than DSRR and has a small computational overhead.

Name #Training #Testing #Features #Nodes
RCV1 677,399 20,242 47,236 5
KDD 8,407,752 748,401 29,890,095 10
Table 1: Statistics of datasets
Figure 3: Top: testing error for different methods. Bottom: training time for different methods. The high-dimensional features are reduced to a low-dimensional space using random hashing. The loss function is the squared hinge loss.

6 Conclusions

In this paper, we have proposed dual-sparse regularized randomized reduction methods for classification. We presented a rigorous theoretical analysis of the recovery error of the proposed methods under a mild condition that the optimal dual variable is (nearly) sparse, for both smooth and non-smooth loss functions and for various randomized reduction approaches. The numerical experiments validate our theoretical analysis and also demonstrate that the proposed reduction and recovery framework can benefit distributed optimization by providing a good initial solution.

Acknowledgements

The authors would like to thank the anonymous reviewers for their helpful and insightful comments. T. Yang was supported in part by NSF (IIS-1463988). R. Jin was partially supported by NSF IIS-1251031 and ONR N000141410631.

References

  • Achlioptas (2003) Achlioptas, Dimitris. Database-friendly random projections: Johnson-lindenstrauss with binary coins. Journal of Computer and System Sciences., 66:671–687, 2003.
  • Agarwal et al. (2011) Agarwal, Alekh, Chapelle, Olivier, Dudík, Miroslav, and Langford, John. A reliable effective terascale linear learning system. CoRR, 2011.
  • Ailon & Chazelle (2009) Ailon, Nir and Chazelle, Bernard. The fast johnson–lindenstrauss transform and approximate nearest neighbors. SIAM J. Comput., 39(1):302–322, 2009.
  • Balcan et al. (2006) Balcan, Maria-Florina, Blum, Avrim, and Vempala, Santosh. Kernels as features: On kernels, margins, and low-dimensional mappings. Machine Learning, 65:79–94, 2006.
  • Bartz et al. (2011) Bartz, Daniel, Hatrick, Kerr, Hesse, Christian W., Müller, Klaus-Robert, and Lemm, Steven. Directional variance adjustment: improving covariance estimates for high-dimensional portfolio optimization. arXiv:1109.3069, 2011.
  • Ben-Hur et al. (2008) Ben-Hur, Asa, Ong, Cheng Soon, Sonnenburg, Sören, Schölkopf, Bernhard, and Rätsch, Gunnar. Support vector machines and kernels for computational biology. PLoS Comput Biology, 4:e1000173, 2008.
  • Bickel et al. (2009) Bickel, Peter J., Ritov, Ya’acov, and Tsybakov, Alexandre B. Simultaneous analysis of lasso and dantzig selector. ANNALS OF STATISTICS, 37(4), 2009.
  • Blum (2005) Blum, Avrim. Random projection, margins, kernels, and feature-selection. In Proceedings of the 2005 International Conference on Subspace, Latent Structure and Feature Selection, volume 3940, pp. 52–68. Springer, 2005.
  • Boyd & Vandenberghe (2004) Boyd, Stephen and Vandenberghe, Lieven. Convex Optimization. Cambridge University Press, 2004.
  • Brank et al. (2002) Brank, Janez, Grobelnik, Marko, Milić-Frayling, Natasa, and Mladenić, D. Feature selection using support vector machines. In Proceedings of the International Conference on Data Mining Methods and Databases for Engineering, Finance, and Other Fields, pp. 84–89, 2002.
  • Dasgupta et al. (2010) Dasgupta, Anirban, Kumar, Ravi, and Sarlós, Tamás. A sparse johnson–lindenstrauss transform. In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), pp. 341–350, 2010.
  • Drineas et al. (2006) Drineas, Petros, Mahoney, Michael W., and Muthukrishnan, S. Sampling algorithms for l2 regression and applications. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1127–1136, 2006.
  • Drineas et al. (2008) Drineas, Petros, Mahoney, Michael W., and Muthukrishnan, S. Relative-error cur matrix decompositions. SIAM Journal Matrix Analysis Applications, 30:844–881, 2008.
  • Drineas et al. (2011) Drineas, Petros, Mahoney, Michael W., Muthukrishnan, S., and Sarlós, Tamás. Faster least squares approximation. Numerische Mathematik, 117(2):219–249, February 2011.
  • Eldar & Kutyniok (2012) Eldar, Yonina C. and Kutyniok, Gitta. Compressed Sensing: Theory and Applications. Compressed Sensing: Theory and Applications. Cambridge University Press, 2012. ISBN 9781107005587.
  • Goldberger et al. (2005) Goldberger, Jacob, Roweis, Sam, Hinton, Geoffrey, and Salakhutdinov, Ruslan. Neighbourhood components analysis. In Advances in Neural Information Processing Systems (NIPS), pp. 513–520, 2005.
  • Guyon et al. (2002) Guyon, Isabelle, Weston, Jason, Barnhill, Stephen, and Vapnik, Vladimir. Gene selection for cancer classification using support vector machines. Machine Learning (ML), 46:389–422, 2002.
  • Jaggi et al. (2014) Jaggi, Martin, Smith, Virginia, Takác, Martin, Terhorst, Jonathan, Krishnan, Sanjay, Hofmann, Thomas, and Jordan, Michael I. Communication-efficient distributed dual coordinate ascent. In Advances in Neural Information Processing Systems (NIPS), pp. 3068–3076, 2014.
  • Jostins & Barrett (2011) Jostins, Luke and Barrett, Jeffrey C. Genetic risk prediction in complex disease. Human Molecular Genetics, 20(R2):R182–R188, 2011.
  • Kane & Nelson (2014) Kane, Daniel M. and Nelson, Jelani. Sparser johnson-lindenstrauss transforms. Journal of the ACM, 61:4:1–4:23, 2014.
  • Kang & Cho (2011) Kang, J., Kugathasan, S., Georges, M., Zhao, H., and Cho, J.H. Improved risk prediction for crohn’s disease with a multi-locus approach. Human Molecular Genetics, 20:2435–2442, 2011.
  • Koltchinskii (2011) Koltchinskii, V. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École D’Été de Probabilités de Saint-Flour XXXVIII-2008. Ecole d’été de probabilités de Saint-Flour. Springer, 2011. ISBN 9783642221460.
  • Lewis et al. (2004) Lewis, David D., Yang, Yiming, Rose, Tony G., and Li, Fan. Rcv1: A new benchmark collection for text categorization research. Journal of Machine Learning Research (JMLR), 5:361–397, 2004.
  • Li et al. (2014) Li, Mu, Andersen, David G, Smola, Alex J, and Yu, Kai. Communication efficient distributed machine learning with the parameter server. In Advances in Neural Information Processing Systems (NIPS), pp. 19–27. 2014.
  • Ma et al. (2014) Ma, Ping, Mahoney, Michael W., and Yu, Bin. A statistical perspective on algorithmic leveraging. In Proceedings of the 31th International Conference on Machine Learning (ICML), pp. 91–99, 2014.
  • Mahoney (2011) Mahoney, Michael W. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123–224, 2011.
  • Mahoney & Drineas (2009) Mahoney, Michael W. and Drineas, Petros. Cur matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
  • Mitchell et al. (2004) Mitchell, Tom M., Hutchinson, Rebecca, Niculescu, Radu S., Pereira, Francisco, Wang, Xuerui, Just, Marcel, and Newman, Sharlene. Learning to decode cognitive states from brain images. Machine Learning, 57(1-2):145–175, 2004.
  • Nelson, Jelani. Johnson-lindenstrauss notes. Technical report, MIT.
  • Paul et al. (2013) Paul, Saurabh, Boutsidis, Christos, Magdon-Ismail, Malik, and Drineas, Petros. Random projections for support vector machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 498–506, 2013.
  • Plan & Vershynin (2011) Plan, Yaniv and Vershynin, Roman. One-bit compressed sensing by linear programming. CoRR, abs/1109.4299, 2011.
  • Rätsch et al. (2005) Rätsch, G., Sonnenburg, S., and Schölkopf, B. RASE: recognition of alternatively spliced exons in c.elegans. Bioinformatics, 21:i369–i377, 2005.
  • Rätsch et al. (2005) Rätsch, Gunnar, Sonnenburg, Sören, and Schölkopf, Bernhard. RASE: recognition of alternatively spliced exons in c.elegans. In Proceedings of the International Conference on Intelligent Systems for Molecular Biology (ISMB) (Supplement of Bioinformatics), pp. 369–377, 2005.
  • Sánchez et al. (2013) Sánchez, Jorge, Perronnin, Florent, Mensink, Thomas, and Verbeek, Jakob J. Image classification with the fisher vector: Theory and practice. International Journal of Computer Vision, 105(3):222–245, 2013.
  • Shi et al. (2009a) Shi, Qinfeng, Petterson, James, Dror, Gideon, Langford, John, Smola, Alex, and Vishwanathan, S.V.N. Hash kernels for structured data. Journal of Machine Learning Research (JMLR), 10:2615–2637, 2009a.
  • Shi et al. (2009b) Shi, Qinfeng, Petterson, James, Dror, Gideon, Langford, John, Smola, Alexander J., Strehl, Alexander L., and Vishwanathan, Vishy. Hash kernels. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 496–503, 2009b.
  • Shi et al. (2012) Shi, Qinfeng, Shen, Chunhua, Hill, Rhys, and van den Hengel, Anton. Is margin preserved after random projection? In Proceedings of the International Conference on Machine Learning (ICML), 2012.
  • Simianer et al. (2012) Simianer, Patrick, Riezler, Stefan, and Dyer, Chris. Joint feature selection in distributed stochastic learning for large-scale discriminative training in smt. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pp. 11–21, 2012.
  • Sonnenburg & Franc (2010) Sonnenburg, Sören and Franc, Vojtech. Coffin: A computational framework for linear svms. In Proceedings of the International Conference on Machine Learning (ICML), pp. 999–1006, 2010.
  • Sonnenburg et al. (2007) Sonnenburg, Sören, Schweikert, Gabriele, Philips, Petra, Behr, Jonas, and Rätsch, Gunnar. Accurate splice site prediction using support vector machines. BMC Bioinformatics, 8, 2007.
  • Tropp (2011) Tropp, Joel A. Improved analysis of the subsampled randomized hadamard transform. Advances in Adaptive Data Analysis, 3(1-2):115–126, 2011.
  • Weinberger et al. (2009) Weinberger, Kilian Q., Dasgupta, Anirban, Langford, John, Smola, Alexander J., and Attenberg, Josh. Feature hashing for large scale multitask learning. In Proceedings of the International Conference on Machine Learning (ICML), pp. 1113–1120, 2009.
  • Xiao & Zhang (2013) Xiao, Lin and Zhang, Tong. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM Journal on Optimization, 23(2):1062–1091, 2013.
  • Yang (2013) Yang, Tianbao. Trading computation for communication: Distributed stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems (NIPS), pp. 629–637, 2013.
  • Zhang et al. (2013) Zhang, Lijun, Mahdavi, Mehrdad, Jin, Rong, Yang, Tianbao, and Zhu, Shenghuo. Recovering the optimal solution by dual random projection. In COLT, pp. 135–157, 2013.
  • Zhang et al. (2014) Zhang, Lijun, Mahdavi, Mehrdad, Jin, Rong, Yang, Tianbao, and Zhu, Shenghuo. Random projections for classification: A recovery approach. IEEE Transactions on Information Theory (IEEE TIT), 60(11):7300–7316, 2014.

Appendix A Proof of Theorem 1

Let be defined as

Since therefore for any