Randomized Sketches of Convex Programs with Sharp Guarantees

04/29/2014 ∙ by Mert Pilanci, et al. ∙ UC Berkeley

Random projection (RP) is a classical technique for reducing storage and computational costs. We analyze RP-based approximations of convex programs, in which the original optimization problem is approximated by the solution of a lower-dimensional problem. Such dimensionality reduction is essential in computation-limited settings, since the complexity of general convex programming can be quite high (e.g., cubic for quadratic programs, and substantially higher for semidefinite programs). In addition to computational savings, random projection is also useful for reducing memory usage, and has useful properties for privacy-sensitive optimization. We prove that the approximation ratio of this procedure can be bounded in terms of the geometry of the constraint set. For a broad class of random projections, including those based on various sub-Gaussian distributions as well as randomized Hadamard and Fourier transforms, the data matrix defining the cost function can be projected down to the statistical dimension of the tangent cone of the constraints at the original solution, which is often substantially smaller than the original dimension. We illustrate consequences of our theory for various cases, including unconstrained and ℓ_1-constrained least squares, support vector machines, and low-rank matrix estimation, and we discuss implications for privacy-sensitive optimization as well as connections with de-noising and compressed sensing.


1 Introduction

Optimizing a convex function subject to constraints is fundamental to many disciplines in engineering, applied mathematics, and statistics [7, 28]. While most convex programs can be solved in polynomial time, the computational cost can still be prohibitive when the problem dimension and/or number of constraints are large. For instance, although many quadratic programs can be solved in cubic time, this scaling may be prohibitive when the dimension is on the order of millions. This type of concern is only exacerbated for more sophisticated cone programs, such as second-order cone and semidefinite programs. Consequently, it is of great interest to develop methods for approximately solving such programs, along with rigorous bounds on the quality of the resulting approximation.

In this paper, we analyze a particular scheme for approximating a convex program defined by minimizing a quadratic objective function over an arbitrary convex set. The scheme is simple to describe and implement, as it is based on performing a random projection of the matrices and vectors defining the objective function. Since the underlying constraint set may be arbitrary, our analysis encompasses many problem classes including quadratic programs (with constrained or penalized least-squares as a particular case), as well as second-order cone programs and semidefinite programs (including low-rank matrix approximation as a particular case).

An interesting class of such optimization problems arises in the context of statistical estimation. Many such problems can be formulated as estimating an unknown parameter based on noisy linear measurements, along with side information that the true parameter belongs to a low-dimensional space. Examples of such low-dimensional structure include sparse vectors, low-rank matrices, discrete sets defined in a combinatorial manner, as well as algebraic sets, including norms for inducing shrinkage or smoothness. Convex relaxations provide a principled way of deriving polynomial-time methods for such problems [7], and their statistical performance has been extensively studied over the past decade (see the papers [8, 35] for overviews). For many such problems, the ambient dimension of the parameter is very large, and the number of samples can also be large. In these contexts, convex programs may be difficult to solve exactly, and reducing the dimension and sample size by sketching is a very attractive option.

Our work is related to a line of work on sketching unconstrained least-squares problems (e.g., see the papers [15, 22, 6] and references therein). The results given here generalize this line of work by providing guarantees for the broader class of constrained quadratic programs. In addition, our techniques are convex-analytic in nature, and by exploiting analytical tools from Banach space geometry and empirical process theory [12, 19, 18], lead to sharper bounds on the sketch size as well as sharper probabilistic guarantees. Our work also provides a unified view of both least-squares sketching [15, 22, 6] and compressed sensing [13, 14]. As we discuss in the sequel, various results in compressed sensing can be understood as special cases of sketched least-squares, in which the data matrix in the original quadratic program is the identity.

In addition to reducing computation and storage, random projection is also useful in the context of privacy preservation. Many types of modern data, including financial records and medical tests, have associated privacy concerns. Random projection allows for a sketched version of the data set to be stored, but such that there is a vanishingly small amount of information about any given data point. Our theory shows that this is possible while still solving a convex program defined by the data set up to δ-accuracy. In this way, we sharpen some results by Zhou and Wasserman [37] on privacy-preserving random projections for sparse regression. Our theory points to an interesting dichotomy in privacy-sensitive optimization problems based on the trade-off between the complexity of the constraint set and mutual information. We show that if the constraint set is simple enough in terms of a statistical measure, privacy-sensitive optimization can be done with arbitrary accuracy.

The remainder of this paper is organized as follows. We begin in Section 2 with a more precise formulation of the problem, and the statement of our main results. In Section 3, we derive corollaries for a number of concrete classes of problems, and provide various simulations that demonstrate the close agreement between the theoretical predictions and behavior in practice. Sections 4 and 5 are devoted to the proofs of our main results, and we conclude in Section 6. Parts of the results given here are to appear in conference form at the International Symposium on Information Theory (2014).

2 Statement of main results

We begin by formulating the problem analyzed in this paper, before turning to a statement of our main results.

2.1 Problem formulation

Consider a convex program of the form

x* = arg min_{x ∈ C} ‖Ax − y‖_2^2,    (1)

where C is some convex subset of R^d, and y ∈ R^n and A ∈ R^{n×d} are a data vector and data matrix, respectively. Our goal is to obtain a δ-optimal solution to this problem in a computationally simpler manner, and we do so by projecting the problem into R^m, where m < n, via a sketching matrix S ∈ R^{m×n}. In particular, consider the sketched problem

x̂ = arg min_{x ∈ C} ‖S(Ax − y)‖_2^2.    (2)

Note that by the optimality and feasibility of x* and x̂, respectively, for the original problem (1), we always have f(x*) ≤ f(x̂), where f(x) := ‖Ax − y‖_2^2 denotes the original objective. Accordingly, we say that x̂ is a δ-optimal approximation to the original problem (1) if

f(x̂) ≤ (1 + δ)^2 f(x*).    (3)

Our main result characterizes the sketch size m required to achieve this bound as a function of δ and other problem parameters.
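To make the scheme concrete, the following small NumPy simulation (with illustrative dimensions and normalizations chosen by us, not taken from the paper) solves an unconstrained instance of the original problem (1) and its Gaussian-sketched version (2), and reports the approximation ratio f(x̂)/f(x*):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 20, 200  # illustrative sizes: sketch dimension m is a small multiple of d

A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + rng.standard_normal(n)

# Original solution x* and its cost f(x*) = ||Ax - y||_2^2
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
f_star = float(np.sum((A @ x_star - y) ** 2))

# Gaussian sketch S in R^{m x n}; solve the sketched problem min ||S(Ax - y)||_2^2
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)
f_hat = float(np.sum((A @ x_hat - y) ** 2))  # evaluate on the ORIGINAL cost

ratio = f_hat / f_star  # delta-optimality (3) means ratio <= (1 + delta)^2
print(ratio)
```

The ratio is always at least one, and approaches one as m grows relative to d.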

Our analysis involves a natural geometric object in convex analysis, namely the tangent cone of the constraint set C at the optimum x*, given by

K = clconv{Δ ∈ R^d : Δ = t(x − x*) for some t ≥ 0 and x ∈ C},    (4)

where clconv denotes the closed convex hull. This set arises naturally in the convex optimality conditions for the original problem (1): any vector Δ ∈ K defines a feasible direction at the optimum x*, and optimality means that it is impossible to decrease the cost function by moving in directions belonging to the tangent cone.

We use AK = {AΔ : Δ ∈ K} to denote the linearly transformed cone. Our main results involve measures of the “size” of this transformed cone when it is intersected with the Euclidean sphere S^{n−1} = {z ∈ R^n : ‖z‖_2 = 1}. In particular, we define the Gaussian width of the set Y = AK ∩ S^{n−1} via

W(Y) = E_g[ sup_{z ∈ Y} |⟨g, z⟩| ],    (5)

where g ∈ R^n is a vector of i.i.d. N(0, 1) variables. This complexity measure plays an important role in Banach space theory, learning theory and statistics (e.g., [31, 19, 5]).
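The Gaussian width (5) is easy to estimate by Monte Carlo whenever the supremum over the set can be computed. For a k-dimensional subspace intersected with the sphere, the supremum is the norm of the projected Gaussian vector, and the squared width concentrates near k (a hypothetical numerical check, not an experiment from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, trials = 400, 10, 200

# Orthonormal basis Q for a random k-dimensional subspace of R^n
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# For Y = subspace ∩ sphere: sup_{z in Y} <g, z> = ||Q^T g||_2
widths = [np.linalg.norm(Q.T @ rng.standard_normal(n)) for _ in range(trials)]
W = float(np.mean(widths))

print(W ** 2)  # close to the statistical dimension k of the subspace
```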

2.2 Guarantees for sub-Gaussian sketches

Our first main result provides a relation between the sufficient sketch size and the Gaussian width in the case of sub-Gaussian sketches. In particular, we say that a row s ∈ R^n of the sketching matrix is σ-sub-Gaussian if it is zero-mean, and if for any fixed unit vector u ∈ S^{n−1}, we have

P[ |⟨s, u⟩| ≥ t ] ≤ 2 e^{−t^2/(2σ^2)}  for all t ≥ 0.    (6)

Of course, this condition is satisfied by the standard Gaussian sketch (i.i.d. N(0, 1) entries). In addition, it holds for various other sketching matrices, including random matrices with i.i.d. Bernoulli (±1) elements, random matrices with rows drawn uniformly from the rescaled unit sphere, and so on. We say that the sketching matrix S ∈ R^{m×n} is drawn from a σ-sub-Gaussian ensemble if each row is σ-sub-Gaussian in the previously defined sense (6).
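Both ensembles just mentioned are straightforward to generate; a minimal sketch follows (the normalizations here are illustrative choices of ours, since the theory is insensitive to the overall scaling of S):

```python
import numpy as np

def gaussian_sketch(m, n, rng):
    """Rows are i.i.d. N(0, I_n): the standard sub-Gaussian sketch."""
    return rng.standard_normal((m, n))

def rademacher_sketch(m, n, rng):
    """Rows have i.i.d. +/-1 entries: also sub-Gaussian, and cheaper to store."""
    return rng.choice([-1.0, 1.0], size=(m, n))

rng = np.random.default_rng(0)
S = rademacher_sketch(64, 256, rng)

# Each row s satisfies E[<s, u>^2] = 1 for any fixed unit vector u
u = np.ones(256) / 16.0  # unit vector
print(np.mean((S @ u) ** 2))  # concentrates around 1
```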

Theorem 1 (Guarantees for sub-Gaussian projections).

Let S ∈ R^{m×n} be drawn from a σ-sub-Gaussian ensemble. Then there are universal constants (c_0, c_1, c_2) such that, for any tolerance parameter δ ∈ (0, 1), given a sketch size lower bounded as

m ≥ (c_0/δ^2) W^2(AK),    (7)

the approximate solution x̂ is guaranteed to be δ-optimal (3) for the original program with probability at least 1 − c_1 e^{−c_2 m δ^2}.

As will be clarified in examples to follow, the squared width W^2(AK) scales proportionally to the statistical dimension, or number of degrees of freedom, of the set AK ∩ S^{n−1}. Consequently, up to constant factors, Theorem 1 guarantees that we can project down to the statistical dimension of the problem while preserving δ-optimality of the solution.

This fact has an interesting corollary in the context of privacy-sensitive optimization. Suppose that we model the data matrix A as being random, and our goal is to solve the original convex program (1) up to δ-accuracy while revealing as little as possible about the individual entries of A. By Theorem 1, whenever the sketch dimension m satisfies the lower bound (7), the sketched data matrix SA suffices to solve the original program up to δ-accuracy. We can thus ask how much information per entry of A is retained by the sketched data matrix. One way to do so is by computing the mutual information per symbol, namely (1/(n d)) I(SA; A), where the rescaling is chosen since A has a total of n d entries. This quantity was studied by Zhou and Wasserman [37] in the context of privacy-sensitive sparse regression, in which C is an ℓ_1-ball, to be discussed at more length in Section 3.2. In our setting, we have the following more generic corollary of Theorem 1:

Corollary 1.

Let the entries of A be drawn i.i.d. from a distribution with finite variance. By using random Gaussian projections with the sketch size given by the bound (7), we can ensure that

(8)

and that the sketched solution is δ-optimal with probability at least 1 − c_1 e^{−c_2 m δ^2}.

Note that the inequality W^2(AK) ≤ n always holds. However, for many problems, we have the much stronger guarantee W^2(AK) = o(n), in which case the bound (8) guarantees that the mutual information per symbol is vanishing. There are various concrete problems, as discussed in Section 3, for which this type of scaling is reasonable. Thus, for any fixed δ ∈ (0, 1), we are guaranteed a δ-optimal solution with a vanishing mutual information per symbol.

Corollary 1 follows by a straightforward combination of past work [37] with Theorem 1. Zhou and Wasserman [37] show that under the stated conditions, for a standard i.i.d. Gaussian sketching matrix S ∈ R^{m×n}, the mutual information rate per symbol is upper bounded by a quantity proportional to m/n. Substituting in the choice of m specified by the bound (7) and applying Theorem 1 yields the claim.

2.3 Guarantees for randomized orthogonal systems

A possible disadvantage of using sub-Gaussian sketches is that they require performing matrix-vector multiplications with unstructured random matrices; such multiplications require O(mn) time per vector in general. Our second main result applies to sketches based on a randomized orthonormal system (ROS), for which matrix multiplication can be performed much more quickly.

In order to define a randomized orthonormal system, we begin with an orthonormal matrix H ∈ R^{n×n} with entries H_{ij} ∈ {−1/√n, +1/√n}. A standard class of such matrices is provided by the Hadamard basis, for which matrix-vector multiplication can be performed in O(n log n) time. Another possible choice is the Fourier basis. Based on any such matrix, a sketching matrix S ∈ R^{m×n} from a ROS ensemble is obtained by sampling i.i.d. rows of the form

s = √n e_j^T H D,

where the random vector e_j ∈ R^n is chosen uniformly at random from the set of all n canonical basis vectors, and D = diag(ν) is a diagonal matrix of i.i.d. Rademacher variables ν ∈ {−1, +1}^n. With the base matrix H chosen as the Hadamard or Fourier basis, then for any fixed vector x ∈ R^n, the product Sx can be computed in O(n log m) time (e.g., see the paper [2] for details). Hence the sketched data (SA, Sy) can be formed in O(n d log m) time, which scales almost linearly in the input size n d.
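To make the construction concrete, here is a minimal randomized-Hadamard (ROS) sketch for n a power of two, built on a fast Walsh-Hadamard transform; the O(n log n) transform replaces any explicit matrix multiplication. This is an illustrative implementation of ours, not the authors' code, and it subsamples coordinates with replacement for simplicity:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized); len(x) must be a power of 2."""
    x = x.copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def ros_sketch(x, m, rng):
    """Apply an m x n ROS sketch to a vector x: S x = sqrt(n/m) * P H D x."""
    n = len(x)
    D = rng.choice([-1.0, 1.0], size=n)   # random diagonal signs (Rademacher)
    z = fwht(D * x) / np.sqrt(n)          # orthonormal Hadamard transform H(Dx)
    rows = rng.integers(0, n, size=m)     # sample m coordinates uniformly
    return np.sqrt(n / m) * z[rows]

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
sx = ros_sketch(x, 128, rng)

# E ||S x||^2 = ||x||^2, so the sketch roughly preserves squared norms
print(np.sum(sx ** 2) / np.sum(x ** 2))
```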

Our main result for randomized orthonormal systems involves the S-Gaussian width of the set Y = AK ∩ S^{n−1}, given by

(9)

As will be clear in the corollaries to follow, in many cases the S-Gaussian width is equivalent to the ordinary Gaussian width (5) up to numerical constants. It also involves the Rademacher width of the set Y, given by

R(Y) = E_ε[ sup_{z ∈ Y} |⟨ε, z⟩| ],    (10)

where ε ∈ {−1, +1}^n is an i.i.d. vector of Rademacher variables.

Theorem 2 (Guarantees for randomized orthonormal system).

Let S ∈ R^{m×n} be drawn from a randomized orthonormal system (ROS). Then given a sketch size m lower bounded as

(11)

the approximate solution x̂ is guaranteed to be δ-optimal (3) for the original program with high probability.

The required projection dimension (11) for ROS sketches is in general larger than that required for sub-Gaussian sketches, due to the presence of an additional logarithmic pre-factor. For certain types of cones, we can use more specialized techniques to remove this pre-factor, so that it is not always required. The details of these arguments are given in Section 5, and we provide some illustrative examples of such sharpened results in the corollaries to follow. However, the potentially larger projection dimension is offset by the much lower computational complexity of forming matrix-vector products using the ROS sketching matrix.

3 Some concrete instantiations

Our two main theorems are general results that apply to any choice of the convex constraint set C. We now turn to some consequences of Theorems 1 and 2 for more specific classes of problems, in which the geometry enters in different ways.

3.1 Unconstrained least squares

We begin with the simplest possible choice, namely C = R^d, which leads to an unconstrained least squares problem. This class of problems has been studied extensively in past work on least-squares sketching [22]; our derivation here provides a sharper result in a more direct manner. At least intuitively, given the data matrix A ∈ R^{n×d}, it should be possible to reduce the dimensionality to the rank of the data matrix while preserving the accuracy of the solution. In many cases, the quantity rank(A) is substantially smaller than n. The following corollaries of Theorems 1 and 2 confirm this intuition:

Corollary 2 (Approximation guarantee for unconstrained least squares).

Consider the case of unconstrained least squares with C = R^d:

  (a) Given a sub-Gaussian sketch with dimension m ≥ (c_0/δ^2) rank(A), the sketched solution is δ-optimal (3) with probability at least 1 − c_1 e^{−c_2 m δ^2}.

  (b) Given a ROS sketch with dimension m ≥ (c_0/δ^2) rank(A), the sketched solution is δ-optimal (3) with high probability.

This corollary improves known results both in the probability estimate and in the required sketch size; in particular, previous results hold only with constant probability; see the paper [22] for an overview of such results. Note that the total computational complexity of computing (SA, Sy) and solving the sketched least squares problem, for instance via QR decomposition [16], is of the order O(m n d + m d^2) for sub-Gaussian sketches, and of the order O(n d log m + m d^2) for ROS sketches. Consequently, by using ROS sketches, the overall complexity of computing a δ-approximate least squares solution with exponentially high probability is O(n d log m + m d^2). In many cases, this complexity is substantially lower than direct computation of the solution via QR decomposition, which would require O(n d^2) operations.

Proof.

Since C = R^d, the tangent cone K is all of R^d, and the set AK is the range of A. Thus, we have

W^2(AK ∩ S^{n−1}) ≤ rank(A),    (12)

where the inequality follows from the fact that the range of A is at most rank(A)-dimensional. Thus, the sub-Gaussian bound in part (a) is an immediate consequence of Theorem 1.

Turning to part (b), a direct application of Theorem 2 would lead to a sub-optimal result involving an additional logarithmic pre-factor. In Section 5.1, we show how a refined argument leads to the bound stated here. ∎

In order to investigate the theoretical prediction of Corollary 2, we performed some simple simulations on randomly generated problem instances. Fixing a dimension d, we formed a random ensemble of least-squares problems by first generating a random data matrix A ∈ R^{n×d} with i.i.d. standard Gaussian entries. For a fixed random vector x^0 ∈ R^d, we then computed the data vector y = A x^0 + w, where the noise vector w has i.i.d. Gaussian entries. Given this random ensemble of problems, we computed the projected data matrix-vector pairs (SA, Sy) using Gaussian, Rademacher, and randomized Hadamard sketching matrices, and then solved the projected convex program. We performed this experiment for a range of different problem sizes n. For any such n, we have rank(A) = d with high probability over the choice of the randomly sampled A. Suppose that we choose a projection dimension of the form m = γ d, where the control parameter γ ranged over a fixed interval. Corollary 2 predicts that the approximation ratio f(x̂)/f(x*) should converge to one under this scaling, for each choice of γ.

Figure 1: Comparison of Gaussian, Rademacher and randomized Hadamard sketches for unconstrained least squares. Each curve plots the approximation ratio f(x̂)/f(x*) versus the control parameter γ, averaged over independent trials, for projection dimensions m = γ d and a range of problem dimensions (n, d).

Figure 1 shows the results of these experiments, plotting the approximation ratio versus the control parameter γ. Consistent with Corollary 2, regardless of the choice of n, once the projection dimension m is a suitably large multiple of d, the approximation quality becomes very good.

3.2 ℓ_1-constrained least squares

We now turn to a constrained form of least squares, in which the geometry of the tangent cone enters in a more interesting way. In particular, consider the following ℓ_1-constrained least squares program, known as the Lasso [9, 34]:

x* = arg min_{‖x‖_1 ≤ R} ‖Ax − y‖_2^2.    (13)

It is widely used in signal processing and statistics for sparse signal recovery and approximation.

In this section, we show that as a corollary of Theorem 1, this quadratic program can be sketched logarithmically in the dimension d when the optimal solution to the original problem is sparse. In particular, assuming that the optimum is unique, we let k denote the number of non-zero coefficients of the unique solution x* to the above program. (When x* is not unique, we let k denote the minimal cardinality among all optimal vectors.) Define the ℓ_1-restricted eigenvalues of the given data matrix A as

(14)
Corollary 3 (Approximation guarantees for -constrained least squares).

Consider the ℓ_1-constrained least squares problem (13):

  1. For sub-Gaussian sketches, a sketch dimension m lower bounded by

    (15)

    guarantees that the sketched solution is δ-optimal (3) with probability at least 1 − c_1 e^{−c_2 m δ^2}.

  2. For ROS sketches, a sketch dimension m lower bounded by

    (16)

    guarantees that the sketched solution is δ-optimal (3) with high probability.

We note that part (a) of this corollary improves the result of Zhou et al. [37], which establishes consistency of the Lasso under a substantially larger Gaussian sketch dimension than the requirement in the bound (15). To be more precise, these two results are slightly different, in that the result [37] focuses on support recovery, whereas Corollary 3 guarantees a δ-accurate approximation of the cost function.

Let us consider the complexity of solving the sketched problem using different methods. In the regime n > d, the complexity of solving the original Lasso problem as a linearly constrained quadratic program via interior point solvers is O(n d^2) per iteration (e.g., see Nesterov and Nemirovski [30]). Thus, computing the sketched data and solving the sketched Lasso problem requires O(m n d) operations for sub-Gaussian sketches, and O(n d log m) operations for ROS sketches, plus the cost of solving the reduced m × d program, which is O(m d^2) per interior point iteration.

Another popular choice for solving the Lasso problem is to use a first-order algorithm [29]; such algorithms require O(n d) operations per iteration, with the number of iterations determined by the target accuracy. If we apply such an algorithm to the sketched version for the same number of steps, then we obtain a comparably accurate solution of the sketched program at a per-iteration cost of O(m d) rather than O(n d). Overall, obtaining this guarantee requires O(m n d) operations for sub-Gaussian sketches, and O(n d log m) operations for ROS sketches, in addition to the iteration costs.
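As one concrete first-order approach (a sketch of ours, not the method used in the paper's experiments), the ℓ_1-ball constraint makes the Frank-Wolfe method attractive: each iteration needs only a gradient and a vertex of the ball, and the method can be run directly on the sketched data (SA, Sy):

```python
import numpy as np

def frank_wolfe_lasso(A, y, radius, iters=2000):
    """Minimize ||Ax - y||_2^2 over the l1-ball {x : ||x||_1 <= radius}."""
    d = A.shape[1]
    x = np.zeros(d)                              # feasible starting point
    for t in range(iters):
        grad = 2.0 * A.T @ (A @ x - y)
        j = int(np.argmax(np.abs(grad)))
        vertex = np.zeros(d)
        vertex[j] = -radius * np.sign(grad[j])   # minimizing vertex of the l1-ball
        x += (2.0 / (t + 2.0)) * (vertex - x)    # standard step size 2/(t+2)
    return x

rng = np.random.default_rng(0)
n, d, m, k = 200, 50, 80, 3
A = rng.standard_normal((n, d)) / np.sqrt(n)
x0 = np.zeros(d); x0[:k] = 1.0                   # 3-sparse ground truth
y = A @ x0 + 0.1 * rng.standard_normal(n)

R = np.linalg.norm(x0, 1)                        # radius set to ||x0||_1 for this demo
x_star = frank_wolfe_lasso(A, y, R)              # original Lasso (13)
S = rng.standard_normal((m, n)) / np.sqrt(m)     # sub-Gaussian sketch
x_hat = frank_wolfe_lasso(S @ A, S @ y, R)       # sketched Lasso

ratio = np.sum((A @ x_hat - y) ** 2) / np.sum((A @ x_star - y) ** 2)
print(ratio)  # approximation ratio on the original cost
```

Every iterate is a convex combination of ℓ_1-ball vertices and the origin, so feasibility is maintained without any projection step.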

Proof.

Let T denote the support of the optimal solution x*. The tangent cone to the ℓ_1-norm constraint at the optimum x* takes the form

K = {Δ ∈ R^d : ⟨z_T, Δ_T⟩ + ‖Δ_{T^c}‖_1 ≤ 0},    (17)

where z_T ∈ {−1, +1}^k is the sign vector of the optimal solution on its support. By the triangle inequality, any vector Δ ∈ K satisfies the inequality

‖Δ‖_1 ≤ 2 ‖Δ_T‖_1 ≤ 2 √k ‖Δ‖_2.    (18)

If ‖AΔ‖_2 = 1, then by the definition (14), we also have the upper bound ‖Δ‖_2 ≤ 1/√(γ_k^−(A)), whence

|⟨g, AΔ⟩| = |⟨A^T g, Δ⟩| ≤ ‖A^T g‖_∞ ‖Δ‖_1 ≤ (2 √k / √(γ_k^−(A))) ‖A^T g‖_∞.    (19)

Note that A^T g is a d-dimensional Gaussian vector, in which the j-th entry has variance ‖a_j‖_2^2. Consequently, inequality (19) combined with standard Gaussian tail bounds [19] imply that

W(AK ∩ S^{n−1}) ≤ c √(k log d / γ_k^−(A)) max_j ‖a_j‖_2.    (20)

Combined with the bound from Corollary 2, also applicable in this setting, the claim (15) follows.

Turning to part (b), the first lower bound in (16) follows from Corollary 2. The second lower bound follows as a corollary of Theorem 2 in application to the Lasso; see Appendix A for the calculations. The third lower bound follows by a specialized argument given in Section 5.3. ∎

In order to investigate the prediction of Corollary 3, we generated a random ensemble of sparse linear regression problems as follows. We first generated a data matrix A ∈ R^{n×d} by sampling i.i.d. standard Gaussian entries, and then a k-sparse base vector x^0 by choosing a uniformly random subset of size k and setting its entries to ±1 independently and equiprobably. Finally, we formed the data vector y = A x^0 + w, where the noise vector w has i.i.d. Gaussian entries.

Figure 2: Comparison of Gaussian, Rademacher and randomized Hadamard sketches for the Lasso program (13). Each curve plots the approximation ratio f(x̂)/f(x*) versus the control parameter γ, averaged over independent trials, for a range of projection dimensions, problem dimensions, and ℓ_1-constraint radii R.

In our experiments, we solved the Lasso (13) with a suitable choice of radius parameter R. We then set the projection dimension m proportional to a control parameter γ, and solved the sketched Lasso for Gaussian, Rademacher and randomized Hadamard sketching matrices. Our theory predicts that the approximation ratio should tend to one as the control parameter γ increases. The results are plotted in Figure 2, and confirm this qualitative prediction.

3.3 Compressed sensing and noise folding

It is worth noting that various compressed sensing results can be recovered as a special case of Corollary 3: more precisely, one in which the “data matrix” is simply the identity (so that A = I_d and n = d). With this choice, the original problem (1) corresponds to the classical denoising problem, namely

x* = arg min_{‖x‖_1 ≤ R} ‖y − x‖_2^2,    (21)

so that the cost function is simply f(x) = ‖y − x‖_2^2. With the choice of constraint set C = {x ∈ R^d : ‖x‖_1 ≤ R}, the optimal solution to the original problem is unique, and can be obtained by performing a coordinate-wise soft-thresholding operation on the data vector y. For this choice, the sketched version of the de-noising problem (21) is given by

x̂ = arg min_{‖x‖_1 ≤ R} ‖S(y − x)‖_2^2.    (22)
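The soft-thresholding characterization can be checked directly in code: the solution of (21) is the Euclidean projection of y onto the ℓ_1-ball, computed by soft-thresholding at a level found from the sorted magnitudes. The routine below follows the well-known sort-based projection algorithm and is an illustrative implementation of ours, not the authors' code:

```python
import numpy as np

def project_l1(y, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} via soft-thresholding."""
    if np.abs(y).sum() <= radius:
        return y.copy()                           # already inside the ball
    u = np.sort(np.abs(y))[::-1]                  # magnitudes in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(y) + 1) > css - radius)[0][-1]
    tau = (css[rho] - radius) / (rho + 1.0)       # data-dependent threshold level
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

y = np.array([3.0, -1.5, 0.2, 0.0])
x_star = project_l1(y, radius=2.0)
print(x_star, np.abs(x_star).sum())  # the projection saturates the constraint
```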
Noiseless version:

In the noiseless version of compressed sensing, we have y = x^0 for some vector x^0 with ‖x^0‖_1 ≤ R, and hence the optimal solution to the original “denoising” problem (21) is given by x* = y, with optimal value f(x*) = 0.

Using the sketched data vector Sy, we can solve the sketched program (22). If doing so yields a δ-approximation x̂, then in this special case, we are guaranteed that

‖x̂ − y‖_2^2 ≤ (1 + δ)^2 f(x*) = 0,    (23)

which implies that we have exact recovery, that is, x̂ = x* = y.

Noisy versions:

In a more general setting, we observe the vector y = x^0 + w, where x^0 ∈ C and w is some type of observation noise. The sketched observation model then takes the form

Sy = S x^0 + S w,

so that the sketching matrix is applied to both the true vector x^0 and the noise vector w. This set-up corresponds to an instance of compressed sensing with “folded” noise (e.g., see the papers [3, 1]), which some argue is a more realistic set-up for compressed sensing. In this context, our results imply that the sketched solution x̂ satisfies the bound

‖x̂ − y‖_2^2 ≤ (1 + δ)^2 min_{‖x‖_1 ≤ R} ‖x − y‖_2^2.    (24)

If we think of y as an approximately sparse vector and x* as the best approximation to y from the ℓ_1-ball, then this bound (24) guarantees that we recover a δ-approximation to the best sparse approximation. Moreover, this bound shows that the compressed sensing error should be closely related to the error in denoising, as has been made precise in recent work [14].

Let us summarize these conclusions in a corollary:

Corollary 4.

Consider an instance of the denoising problem (21) in which the vector x^0 has at most k non-zero entries.

  1. For sub-Gaussian sketches with projection dimension m ≥ (c_0/δ^2) k log d, we are guaranteed exact recovery in the noiseless case (23), and δ-approximate recovery (24) in the noisy case, both with probability at least 1 − c_1 e^{−c_2 m δ^2}.

  2. For ROS sketches, the same conclusions hold with high probability using a sketch dimension

    (25)

Of course, a more general version of this corollary holds for any convex constraint set C, involving the Gaussian/Rademacher width functions. In this more general setting, the corollary generalizes results by Chandrasekaran et al. [8], who studied randomized Gaussian sketches in application to atomic norms, to other types of sketching matrices and other types of constraints. They provide a number of calculations of widths for various atomic norm constraint sets, including permutation and orthogonal matrices, and cut polytopes, which can be used in conjunction with the more general form of Corollary 4.

3.4 Support vector machine classification

Our theory also has applications to learning linear classifiers based on labeled samples. In the context of binary classification, a labeled sample is a pair (a, z), where the vector a ∈ R^d represents a collection of features, and z ∈ {−1, +1} is the associated class label. A linear classifier is specified by a function a ↦ sign(⟨w, a⟩), where w ∈ R^d is a weight vector to be estimated.

Given a set of labelled patterns {(a_i, z_i)}_{i=1}^n, the support vector machine [10, 33] estimates the weight vector w by minimizing the function

(26)

In this formulation, the squared hinge loss is used to measure the performance of the classifier on sample i, and the quadratic penalty on w serves as a form of regularization.

By considering the dual of this problem, we arrive at a least-squares problem that is amenable to our sketching techniques. Let A ∈ R^{d×n} be a matrix with a_i as its i-th column, let D be a diagonal matrix of the labels z_i, and let B = A D. With this notation, the associated dual problem (e.g., see the paper [20]) takes the form

(27)

The optimal solution α* ∈ R^n corresponds to a vector of weights associated with the samples: it specifies the optimal SVM weight vector w* as a weighted combination of the samples a_i. It is often the case that the dual solution has relatively few non-zero coefficients, corresponding to samples that lie on the so-called margin of the support vector machine.

The sketched version is then given by

(28)

The simplex constraint in the quadratic program (27), although not identical to an ℓ_1-constraint, leads to similar scaling in terms of the sketch dimension.
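Quadratic programs over the simplex, of the general kind arising in (27) and its sketched version (28), can be handled by projected gradient descent once Euclidean projection onto the simplex is available; that projection admits the same sort-based O(n log n) routine as the ℓ_1-ball. The helper below is an illustrative sketch of ours (with a generic PSD objective), not code from the paper:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_gradient(Q, iters=200, step=None):
    """Minimize x^T Q x over the simplex by projected gradient descent."""
    n = Q.shape[0]
    x = np.full(n, 1.0 / n)                   # feasible starting point
    step = step or 0.5 / np.linalg.norm(Q, 2) # conservative step for smoothness
    for _ in range(iters):
        x = project_simplex(x - step * (2.0 * Q @ x))
    return x

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 10))
Q = B.T @ B / 5.0                             # PSD matrix, as in a sketched dual
alpha = projected_gradient(Q)
print(alpha.sum(), alpha.min())               # feasible: sums to 1, non-negative
```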

Corollary 5 (Sketch dimensions for support vector machines).

Given a collection of labeled samples {(a_i, z_i)}_{i=1}^n, let k denote the number of samples on the margin in the SVM solution (27). Then given a sub-Gaussian sketch with dimension

(29)

the sketched solution (28) is δ-optimal with probability at least 1 − c_1 e^{−c_2 m δ^2}.

We omit the proof, as the calculations specializing from Theorem 1 are essentially the same as those of Corollary 3. The computational complexity of solving the SVM problem as a linearly constrained quadratic problem is the same as for the Lasso problem, and hence the same conclusions apply.

Figure 3: Comparison of Gaussian, Rademacher and randomized Hadamard sketches for the support vector machine (27). Each curve plots the approximation ratio versus the control parameter γ, averaged over independent trials, for a range of projection dimensions and problem dimensions.

In order to study the prediction of Corollary 5, we performed some classification experiments, and tested the performance of the sketching procedure. Consider a two-component Gaussian mixture model, based on two Gaussian component distributions whose mean vectors are themselves drawn uniformly at random. Placing equal weights on each component, we draw n samples from this mixture distribution, and then use the resulting data to solve the SVM dual program (27), thereby obtaining an optimal linear decision boundary specified by the vector w*. The number of non-zero entries of the dual solution corresponds to the number of examples on the decision boundary, known as support vectors. We then solve the sketched version (28), using either Gaussian, Rademacher or randomized Hadamard sketches, and using a projection dimension scaling proportionally to a control parameter γ. We repeat this experiment for a range of problem dimensions, performing multiple trials for each choice of γ.

Figure 3 shows plots of the approximation ratio versus the control parameter. Each bundle of curves corresponds to a different problem dimension, and has three curves for the three different sketch types. Consistent with the theory, in all cases, the approximation ratio approaches one as γ scales upwards.

It is worthwhile noting that similar sketching techniques can be applied to other optimization problems that involve the unit simplex as a constraint. Another instance is the Markowitz formulation of the portfolio optimization problem [23]. Here the goal is to estimate a vector in the unit simplex, corresponding to non-negative weights associated with each of d possible assets, so as to minimize the variance of the return subject to a lower bound on the expected return. More precisely, we let μ ∈ R^d denote a vector corresponding to the mean returns associated with the assets, and we let Σ be a symmetric, positive semidefinite matrix, corresponding to the covariance of the returns. Typically, the mean vector and covariance matrix are estimated from data. Given the pair (μ, Σ), the Markowitz allocation is given by

(30)

Note that this problem can be written in the same form as the SVM, since the covariance matrix can be factorized as Σ = A^T A. Whenever the expected return constraint is active at the solution, the tangent cone at the optimum is cut out by the simplex constraints together with the active return constraint, and is a subset of the tangent cone for the SVM; hence the bounds of Corollary 5 also apply to the portfolio optimization problem.

3.5 Matrix estimation with nuclear norm regularization

We now turn to the use of sketching for matrix estimation problems, and in particular those that involve nuclear norm constraints. Let C be a convex subset of R^{d_1×d_2}, the space of all d_1 × d_2 matrices. Many matrix estimation problems can be written in the general form

min_{X ∈ C} ‖𝒜(X) − y‖_2^2,

where y ∈ R^n is a data vector, and 𝒜 is a linear operator from R^{d_1×d_2} to R^n. Letting vec(X) denote the vectorized form of the matrix X, we can write 𝒜(X) = A vec(X) for a suitably defined matrix A ∈ R^{n×D}, where D = d_1 d_2. Consequently, our general sketching techniques are again applicable.

In many matrix estimation problems, of primary interest are matrices of relatively low rank. Since rank constraints are typically computationally intractable, a standard convex surrogate is the nuclear norm of a matrix, given by the sum of its singular values:

‖X‖_nuc = Σ_{j=1}^{min{d_1, d_2}} σ_j(X).    (31)
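In code, the nuclear norm (31) is simply the sum of singular values; a quick illustration on a rank-2 matrix (a toy check of ours, not from the paper):

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values: the convex surrogate for rank."""
    return float(np.linalg.svd(X, compute_uv=False).sum())

rng = np.random.default_rng(0)
# A rank-2 matrix: only two of its singular values are non-zero,
# so the nuclear norm equals the sum of the top two singular values.
X = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
s = np.linalg.svd(X, compute_uv=False)
print(nuclear_norm(X), s[:2].sum())
```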

As an illustrative example, let us consider the problem of weighted low-rank matrix approximation. Suppose that we wish to approximate a given matrix $A \in \mathbb{R}^{d_1 \times d_2}$ by a low-rank matrix $Z$ of the same dimensions, where we measure the quality of approximation using a weighted Frobenius norm

$\|Z - A\|_\omega^2 := \sum_{j=1}^{d_2} \omega_j^2 \|z_j - a_j\|_2^2,$  (32)

where $z_j$ and $a_j$ are the $j$-th columns of $Z$ and $A$ respectively, and $\omega \in \mathbb{R}^{d_2}$ is a vector of non-negative weights. If the weight vector is uniform ($\omega_j = 1$ for all $j$), then the norm $\|\cdot\|_\omega$ is simply the usual Frobenius norm, and a low-rank minimizer can be obtained by computing a partial singular value decomposition of the data matrix $A$. For non-uniform weights, it is no longer easy to solve the rank-constrained minimization problem. Accordingly, it is natural to consider the convex relaxation

$Z^\dagger := \arg\min_{\|Z\|_{\mathrm{nuc}} \leq R} \|Z - A\|_\omega^2,$  (33)

in which the rank constraint is replaced by the nuclear norm constraint $\|Z\|_{\mathrm{nuc}} \leq R$. This program can be written in an equivalent vectorized form in dimension $D = d_1 d_2$ by defining the block-diagonal matrix $B := \mathrm{blkdiag}(\omega_1 I_{d_1}, \ldots, \omega_{d_2} I_{d_1})$, as well as the vector $y \in \mathbb{R}^D$ whose $j$-th block is given by $\omega_j a_j$. We can then consider the equivalent problem $\min_{\|Z\|_{\mathrm{nuc}} \leq R} \|y - B\,\mathrm{vec}(Z)\|_2^2$, as well as its sketched version

$\min_{\|Z\|_{\mathrm{nuc}} \leq R} \|S y - S B\,\mathrm{vec}(Z)\|_2^2.$  (34)
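The block-diagonal reformulation can be verified numerically. The sketch below (illustrative dimensions, column-major vec convention assumed) builds $B$ as a Kronecker product and checks that the vectorized least-squares objective matches the weighted Frobenius norm (32).

```python
import numpy as np

rng = np.random.default_rng(4)
d1, d2 = 5, 3
A = rng.normal(size=(d1, d2))
Z = rng.normal(size=(d1, d2))
w = rng.uniform(0.5, 2.0, size=d2)   # positive column weights omega_j

vec = lambda M: M.flatten(order="F")

# Block-diagonal matrix blkdiag(w_1 I, ..., w_{d2} I) as a Kronecker product.
B = np.kron(np.diag(w), np.eye(d1))
y = vec(A * w)                       # j-th block of y is w_j * a_j

vectorized = np.linalg.norm(B @ vec(Z) - y) ** 2
weighted = ((w ** 2) * ((Z - A) ** 2).sum(axis=0)).sum()
print(np.isclose(vectorized, weighted))   # → True
```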

Suppose that the original optimum $Z^\dagger$ has rank $r$: it can then be described using at most $r(d_1 + d_2)$ real numbers. Intuitively, it should be possible to project the original problem down to this dimension while still guaranteeing an accurate solution. The following corollary provides a rigorous confirmation of this intuition:

Corollary 6 (Sketch dimensions for weighted low-rank approximation).

Consider the weighted low-rank approximation problem (33) based on a weight vector $\omega$ with condition number $\kappa := \omega_{\max}/\omega_{\min}$, and suppose that the optimal solution $Z^\dagger$ has rank $r$.

  1. For sub-Gaussian sketches, a sketch dimension lower bounded by

    $m \geq \frac{c_0}{\delta^2}\, \kappa^2\, r\, (d_1 + d_2)$  (35)

    guarantees that the sketched solution (34) is $\delta$-optimal (3) with probability at least $1 - c_1 e^{-c_2 m \delta^2}$.

  2. For ROS sketches, a sketch dimension lower bounded by

    $m \geq \frac{c_0}{\delta^2}\, \kappa^2\, r\, (d_1 + d_2)\, \log^4(d_1 d_2)$  (36)

    guarantees that the sketched solution (34) is $\delta$-optimal (3) with probability at least $1 - c_1 e^{-c_2 m \delta^2 / \log^4(d_1 d_2)}$.

For this particular application, the use of sketching is not likely to lead to substantial computational savings, since the optimization space remains $D$-dimensional in both the original and sketched versions. However, the lower-dimensional nature of the sketched data can still be very useful in reducing storage requirements and in privacy-sensitive optimization.

Proof.

We prove part (a) here, leaving the proof of part (b) to Section 5.4. Throughout the proof, we adopt the shorthand notation $\omega_{\max} := \max_j \omega_j$ and $\omega_{\min} := \min_j \omega_j$. As shown in past work on nuclear norm regularization (see Lemma 1 in the paper [27]), the tangent cone of the nuclear norm constraint at a rank-$r$ matrix is contained within the cone

$\mathcal{K} := \big\{\Delta \in \mathbb{R}^{d_1 \times d_2} \mid \|\Delta\|_{\mathrm{nuc}} \leq 2\sqrt{2r}\, \|\Delta\|_F\big\}.$  (37)

For any matrix $\Delta \in \mathcal{K}$ with $\|B\,\mathrm{vec}(\Delta)\|_2 = 1$, we must have $\|\Delta\|_F \leq 1/\omega_{\min}$. By definition of the Gaussian width, we then have

$\mathbb{W}(B\mathcal{K}) = \mathbb{E}\Big[\sup_{\Delta \in \mathcal{K},\ \|B\,\mathrm{vec}(\Delta)\|_2 = 1} \langle g,\, B\,\mathrm{vec}(\Delta)\rangle\Big], \qquad g \sim N(0, I_D).$

Since $B$ is a diagonal matrix, the vector $B^T g$ has independent Gaussian entries with maximal variance $\omega_{\max}^2$. Letting $G \in \mathbb{R}^{d_1 \times d_2}$ denote the matrix formed by segmenting the vector $B^T g$ into $d_2$ blocks of length $d_1$, we have

$\langle g,\, B\,\mathrm{vec}(\Delta)\rangle = \langle\!\langle G,\, \Delta \rangle\!\rangle \leq \|G\|_{\mathrm{op}}\, \|\Delta\|_{\mathrm{nuc}} \leq \frac{2\sqrt{2r}}{\omega_{\min}}\, \|G\|_{\mathrm{op}},$

where we have used the duality between the operator and nuclear norms. By standard results on operator norms of Gaussian random matrices [11], we have $\mathbb{E}\|G\|_{\mathrm{op}} \leq \omega_{\max}(\sqrt{d_1} + \sqrt{d_2})$, and hence

$\mathbb{W}(B\mathcal{K}) \leq 2\sqrt{2r}\, \frac{\omega_{\max}}{\omega_{\min}}\, (\sqrt{d_1} + \sqrt{d_2}) = 2\sqrt{2r}\, \kappa\, (\sqrt{d_1} + \sqrt{d_2}).$

Thus, the bound (35) follows as a corollary of Theorem 1. ∎
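The operator-norm estimate for a standard Gaussian matrix used in the proof above can be checked numerically. The sketch below (illustrative dimensions, iid $N(0,1)$ entries, i.e. unit weights) compares $\|G\|_{\mathrm{op}}$ against $\sqrt{d_1} + \sqrt{d_2}$.

```python
import numpy as np

rng = np.random.default_rng(5)
d1, d2 = 300, 200

G = rng.normal(size=(d1, d2))    # iid standard Gaussian entries
opnorm = np.linalg.norm(G, 2)    # operator norm = largest singular value

# For large dimensions, E ||G||_op is close to sqrt(d1) + sqrt(d2),
# and the operator norm concentrates sharply around its mean.
bound = np.sqrt(d1) + np.sqrt(d2)
print(opnorm / bound)            # close to 1
```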

3.6 Group sparse regularization

As a final example, let us consider optimization problems that involve constraints to enforce group sparsity. This notion is a generalization of elementwise sparsity, defined in terms of a partition $\mathcal{G}$ of the index set $\{1, \ldots, d\}$ into a collection of non-overlapping subsets, referred to as groups. Given a group $g \in \mathcal{G}$ and a vector $x \in \mathbb{R}^d$, we use $x_g \in \mathbb{R}^{|g|}$ to denote the sub-vector indexed by elements of $g$. A basic form of the group Lasso norm [36] is given by

$\|x\|_{\mathcal{G}} := \sum_{g \in \mathcal{G}} \|x_g\|_2.$  (38)

Note that in the special case that $\mathcal{G}$ consists of $d$ groups, each of size 1, this norm reduces to the usual $\ell_1$-norm. More generally, with non-trivial grouping, it defines a second-order cone constraint [7]. Bach et al. [4] provide an overview of the group Lasso norm (38), as well as more exotic choices for enforcing group sparsity.
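A direct implementation of the group Lasso norm (38) is straightforward; the sketch below (with an illustrative partition) also checks the singleton-group special case, which recovers the $\ell_1$-norm.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=6)
groups = [[0, 1], [2, 3, 4], [5]]    # an illustrative partition of {0,...,5}

def group_norm(x, groups):
    """Group Lasso norm: sum of Euclidean norms of the group sub-vectors."""
    return sum(np.linalg.norm(x[g]) for g in groups)

# With singleton groups, the group norm reduces to the ordinary l1-norm.
singletons = [[j] for j in range(len(x))]
print(np.isclose(group_norm(x, singletons), np.abs(x).sum()))   # → True
```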

Here let us consider the problem of sketching the second-order cone program (SOCP)

$x^\dagger := \arg\min_{\|x\|_{\mathcal{G}} \leq R} \|Ax - y\|_2^2.$  (39)

We let $s$ denote the number of active groups in the optimal solution $x^\dagger$, that is, the number of groups $g \in \mathcal{G}$ for which $x^\dagger_g \neq 0$. For any group $g$, we use $A_g \in \mathbb{R}^{n \times |g|}$ to denote the sub-matrix of $A$ with columns indexed by $g$. In analogy to the sparse RE condition (14), we define the group-sparse restricted eigenvalue

$\gamma^-_{s,\mathcal{G}}(A) := \min_{z \neq 0,\ \|z\|_{\mathcal{G}} \leq 2\sqrt{s}\,\|z\|_2} \frac{\|Az\|_2^2}{\|z\|_2^2}.$

Corollary 7 (Guarantees for group-sparse least squares).

For the group Lasso program (39) with maximum group size $T := \max_{g \in \mathcal{G}} |g|$, a projection dimension lower bounded as

$m \geq \frac{c_0}{\delta^2}\, \frac{\max_{g \in \mathcal{G}} \|A_g\|_{\mathrm{op}}^2}{\gamma^-_{s,\mathcal{G}}(A)}\; s\, \big(\sqrt{T} + \sqrt{\log |\mathcal{G}|}\big)^2$  (40)

guarantees that the sketched solution is $\delta$-optimal (3) with probability at least $1 - c_1 e^{-c_2 m \delta^2}$.

Note that this is a generalization of Corollary 3 on sketching the ordinary Lasso. Indeed, when we have $d$ groups, each of size 1, then the lower bound (40) reduces to the lower bound (15). As might be expected, the proof of Corollary 7 is similar to that of Corollary 3. It makes use of some standard results on the expected maxima of $\chi^2$-variates to upper bound the Gaussian complexity; see the paper [26] for more details on this calculation.
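The maxima of $\chi$-variates driving this calculation can be explored empirically. The following Monte Carlo sketch (illustrative sizes) checks the standard bound $\mathbb{E}\max_{g} \|w_g\|_2 \leq \sqrt{T} + \sqrt{2 \log K}$ for $K$ groups of iid $N(0, I_T)$ vectors; this is an assumption-laden illustration, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(7)
K, T, trials = 50, 8, 2000           # number of groups, group size, Monte Carlo trials

# Each ||w_g||_2 for w_g ~ N(0, I_T) is a chi-variate with T degrees of freedom;
# estimate the expected maximum over K independent groups.
samples = np.linalg.norm(rng.normal(size=(trials, K, T)), axis=2).max(axis=1)
estimate = samples.mean()

# Standard Gaussian-concentration bound on the expected maximum.
bound = np.sqrt(T) + np.sqrt(2 * np.log(K))
print(estimate <= bound)             # → True
```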

4 Proofs of main results

We now turn to the proofs of our main results, namely Theorem 1 on sub-Gaussian sketching, and Theorem 2 on sketching with randomized orthogonal systems. At a high level, each proof consists of two parts. The first part is a deterministic argument, using convex optimality conditions. The second part is probabilistic, and depends on the particular choice of random sketching matrices.

4.1 Main argument

Central to the proofs of both Theorems 1 and 2 are the following two variational quantities:

$Z_1(A\mathcal{K}) := \inf_{v \in A\mathcal{K} \cap \mathcal{S}^{n-1}} \frac{1}{m} \|Sv\|_2^2,$  (41a)
$Z_2(A\mathcal{K}) := \sup_{v \in A\mathcal{K} \cap \mathcal{S}^{n-1}} \Big|\Big\langle u, \Big(\frac{S^T S}{m} - I_n\Big) v \Big\rangle\Big|,$  (41b)

where we recall that $\mathcal{S}^{n-1}$ is the Euclidean unit sphere in $\mathbb{R}^n$, and in equation (41b), the unit vector $u \in \mathcal{S}^{n-1}$ is fixed but arbitrary. These are deterministic quantities for any fixed choice of sketching matrix $S$, but random variables for randomized sketches. The following lemma demonstrates the significance of these two quantities:

Lemma 1.

For any sketching matrix $S \in \mathbb{R}^{m \times n}$, we have

$f(\hat{x}) \leq \Big(1 + 2\, \frac{Z_2(A\mathcal{K})}{Z_1(A\mathcal{K})}\Big)^2 f(x^*).$  (42)

Consequently, we see that in order to establish that