Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back

08/15/2016 ∙ by Vitaly Feldman, et al.

In stochastic convex optimization the goal is to minimize a convex function F(x) ≐ E_{f∼D}[f(x)] over a convex set K ⊂ R^d, where D is some unknown distribution and each f(·) in the support of D is convex over K. The optimization is commonly based on i.i.d. samples f^1, f^2, ..., f^n from D. A standard approach to such problems is empirical risk minimization (ERM), which optimizes F_S(x) ≐ 1/n ∑_{i≤n} f^i(x). Here we consider the question of how many samples are necessary for ERM to succeed and the closely related question of uniform convergence of F_S to F over K. We demonstrate that in the standard ℓ_p/ℓ_q setting of Lipschitz-bounded functions over a K of bounded radius, ERM requires a sample size that scales linearly with the dimension d. This nearly matches standard upper bounds and improves on the Ω(log d) dependence proved for the ℓ_2/ℓ_2 setting by Shalev-Shwartz et al. (2009). In stark contrast, these problems can be solved using a dimension-independent number of samples for the ℓ_2/ℓ_2 setting and a log d dependence for the ℓ_1/ℓ_∞ setting using other approaches. We further show that our lower bound applies even if the functions in the support of D are smooth and efficiently computable and even if an ℓ_1 regularization term is added. Finally, we demonstrate that for a more general class of bounded-range (but not Lipschitz-bounded) stochastic convex programs an infinite gap appears already in dimension 2.







1 Introduction

Numerous central problems in machine learning, statistics and operations research are special cases of stochastic optimization from i.i.d. data samples. In this problem the goal is to optimize the value of the expected objective function F(x) ≐ E_{f∼D}[f(x)] over some set K given i.i.d. samples of f ∼ D. For example, in supervised learning the set K consists of hypothesis functions from Z to Y and each sample is an example described by a pair (z, y) ∈ Z × Y. For some fixed loss function L, an example (z, y) defines a function from K to R given by f_{(z,y)}(h) = L(h(z), y). The goal is to find a hypothesis h that (approximately) minimizes the expected loss relative to some distribution over examples: E_{(z,y)∼D}[L(h(z), y)].

Here we are interested in stochastic convex optimization (SCO) problems in which K is some convex subset of R^d and each function in the support of D is convex over K. The importance of this setting stems from the fact that such problems can be solved efficiently via a large variety of known techniques. Therefore in many applications even if the original optimization problem is not convex, it is replaced by a convex relaxation.

A classic and widely-used approach to solving stochastic optimization problems is empirical risk minimization (ERM), also referred to as stochastic average approximation (SAA) in the optimization literature. In this approach, given a set of samples S = (f^1, ..., f^n), the empirical objective function F_S(x) ≐ 1/n ∑_{i≤n} f^i(x) is optimized (sometimes with an additional regularization term such as λ‖x‖² for some λ > 0). The question we address here is the number of samples required for this approach to work distribution-independently. More specifically, for some fixed convex body K and fixed set of convex functions F over K, what is the smallest number of samples n such that for every probability distribution D supported on F, any algorithm that minimizes F_S given n i.i.d. samples from D will produce an ε-optimal solution x̂ to the problem (namely, F(x̂) ≤ min_{x∈K} F(x) + ε) with probability at least 1 − δ? We will refer to this number as the sample complexity of ERM for ε-optimizing F over K (we will fix δ = 1/2 for now).
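To make the ERM/SAA setup concrete, here is a minimal sketch (not from the paper; the instance and all names are illustrative) that minimizes an empirical objective F_S over a one-dimensional K by projected subgradient descent, for the toy family f_i(x) = |x − a_i|:

```python
import numpy as np

# Toy instance (illustrative, not from the paper): each sample is the convex,
# 1-Lipschitz function f_i(x) = |x - a_i| on K = [-1, 1], so the empirical
# objective F_S(x) = (1/n) * sum_i |x - a_i| is minimized at a median of the a_i.
def erm_subgradient(samples, domain=(-1.0, 1.0), steps=2000):
    lo, hi = domain
    x = 0.0
    for t in range(1, steps + 1):
        g = np.mean(np.sign(x - samples))  # a subgradient of F_S at x
        x -= g / np.sqrt(t)                # standard 1/sqrt(t) step size
        x = min(max(x, lo), hi)            # project back onto K
    return x

a = np.array([-0.5, 0.1, 0.2, 0.4, 0.9])
x_hat = erm_subgradient(a)  # converges near the median, 0.2
```

Any exact minimizer of F_S would do here; subgradient descent is used only because it is the simplest projection-friendly method for non-smooth convex objectives.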

The sample complexity of ERM for ε-optimizing F over K is lower bounded by the sample complexity of ε-optimizing F over K, that is, the number of samples that is necessary to find an ε-optimal solution for any algorithm. On the other hand, it is upper bounded by the number of samples that ensures uniform convergence of F_S to F. Namely, if with probability ≥ 1 − δ, for all x ∈ K, |F_S(x) − F(x)| ≤ ε/2, then, clearly, any algorithm based on ERM will succeed. As a result, ERM and uniform convergence are the primary tools for analysis of the sample complexity of learning problems and are the key subject of study in statistical learning theory. Fundamental results in VC theory imply that in some settings, such as binary classification and least-squares regression, uniform convergence is also a necessary condition for learnability ([23, 17]) and therefore the three measures of sample complexity mentioned above nearly coincide.

In the context of stochastic convex optimization the study of sample complexity of ERM and uniform convergence was initiated in a groundbreaking work of Shalev-Shwartz, Shamir, Srebro and Sridharan [18]. They demonstrated that the relationships between these notions of sample complexity are substantially more delicate even in the most well-studied settings of SCO. Specifically, let K be a unit ℓ_2 ball and F be the set of all convex sub-differentiable functions with Lipschitz constant relative to ℓ_2 bounded by 1 or, equivalently, ‖∇f(x)‖_2 ≤ 1 for all x ∈ K. Then, known algorithms for SCO imply that the sample complexity of this problem is O(1/ε²), often expressed as a rate of convergence of 1/√n ([14, 17]). On the other hand, Shalev-Shwartz et al. [18] show (the dependence on ε is not stated explicitly but follows immediately from their analysis) that the sample complexity of ERM for solving this problem is Ω(log d). The only known upper bound for the sample complexity of ERM is O(d/ε²) and relies only on the uniform convergence of Lipschitz-bounded functions [21, 18].

As can be seen from this discussion, the work of Shalev-Shwartz et al. [18] still leaves a major gap between known bounds on the sample complexity of ERM (and also uniform convergence) for this basic Lipschitz-bounded ℓ_2/ℓ_2 setup. Another natural question is whether the gap is present in the popular ℓ_1/ℓ_∞ setup. In this setup K is a unit ℓ_1 ball (or in some cases a simplex) and ‖∇f(x)‖_∞ ≤ 1 for all x ∈ K. The sample complexity of SCO in this setup scales as log d/ε² ([14, 17]) and therefore even an appropriately modified lower bound in [18] does not imply any gap. More generally, the choice of norm can have a major impact on the relationship between these sample complexities and hence needs to be treated carefully. For example, for the (reversed) ℓ_∞/ℓ_1 setting the sample complexity of the problem scales with the dimension ([10]) and nearly coincides with the number of samples sufficient for uniform convergence.

1.1 Overview of Results

In this work we substantially strengthen the lower bound in [18], proving that a linear dependence on the dimension d is necessary for ERM (and, consequently, uniform convergence). We then extend the lower bound to all ℓ_p/ℓ_q setups and examine several related questions. Finally, we examine a more general setting of bounded-range SCO (that is, |f(x)| ≤ 1 for all x ∈ K). While the sample complexity of this setting is still low (for example when K is an ℓ_2 ball) and efficient algorithms are known, we show that ERM might require an infinite number of samples already for d = 2.

Our work implies that in SCO, even optimization algorithms that exactly minimize the empirical objective function can produce solutions with generalization error that is much larger than the generalization error of solutions obtained via some standard approaches. Another, somewhat counterintuitive, conclusion from our lower bounds is that, from the point of view of generalization of ERM and uniform convergence, convexity does not reduce the sample complexity in the worst case.

Basic construction: Our basic construction is fairly simple and its analysis is inspired by the technique in [18]. It is based on functions of the form g_V(x) ≐ max{1/2, max_{w∈V} ⟨w̄, x⟩}. Note that the maximum operator preserves both convexity and the Lipschitz bound (relative to any norm). See Figure 1 for an illustration of such a function for d = 2.

Figure 1: Basic construction for d = 2.

The distribution over the sets V that define such functions is uniform over all subsets of some set of vectors W of size 2^{d/6} such that for any two distinct w₁, w₂ ∈ W, ⟨w₁, w₂⟩ ≤ ‖w₁‖·‖w₂‖/2. Equivalently, each element of W is included in V with probability 1/2 independently of the other elements of W. This implies that if the number of samples is less than d/6 then, with probability > 1/2, at least one of the vectors in W (say w) will not be observed in any of the samples. This implies that F_S can be minimized while maximizing ⟨w̄, x⟩ (the maximum over the unit ball is attained at w̄ and equals 1). Note that a function randomly chosen from our distribution includes the term ⟨w̄, x⟩ in the maximum operator with probability 1/2. Therefore the value of the expected function F at w̄ is 3/4 whereas the minimum of F is 1/2. In particular, there exists an ERM algorithm with generalization error of at least 1/4. The details of the construction appear in Sec. 3.1 and Thm. 3.1 gives the formal statement of the lower bound. We also show that, by scaling the construction appropriately, we can obtain the same lower bound for any ℓ_p/ℓ_q setup (see Thm. 3.1).

Low complexity construction: The basic construction relies on functions that require an exponential (in d) number of bits to describe and exponential time to compute. Most applications of SCO use efficiently computable functions and therefore it is natural to ask whether the lower bound still holds for such functions. To answer this question we describe a construction based on a set of functions where each function requires only a linear (in d) number of bits to describe and each function can be computed in time polynomial in d. To achieve this we will use a set W that consists of (scaled) codewords of an asymptotically good and efficiently computable binary error-correcting code [12, 22]. The functions are defined in a similar way but the additional structure of the code allows using only a limited number of subsets of W to define the functions. Further details of the construction appear in Section 4.

Smoothness: The use of the maximum operator results in functions that are highly non-smooth (that is, their gradient is not Lipschitz-bounded) whereas the construction in [18] uses smooth functions. Smoothness plays a crucial role in many algorithms for convex optimization (see [5] for examples). It reduces the sample complexity of SCO in the ℓ_2/ℓ_2 setup when the smoothness parameter is a constant ([14, 17]). Therefore it is natural to ask whether our strong lower bound holds for smooth functions as well. We describe a modification of our construction that proves a similar lower bound in the smooth case (with constant generalization error). The main idea is to replace each linear function ⟨w̄, x⟩ with some smooth function ν(⟨w̄, x⟩), guaranteeing that for different vectors w₁, w₂ ∈ W and every x ∈ K, only one of ν(⟨w̄₁, x⟩) and ν(⟨w̄₂, x⟩) can be non-zero. This makes it easy to control the smoothness of the resulting function. See Figure 2 for an illustration of a function on which the construction is based (for d = 2). The details of this construction appear in Sec. 3.2 and the formal statement in Thm. 3.2.

Figure 2: Construction using 1-smooth functions for d = 2.

ℓ_1-regularization: Another important contribution in [18] is the demonstration of the important role that strong convexity plays for generalization in SCO: minimization of F_S(x) + λR(x) ensures that ERM will have low generalization error whenever R is strongly convex (for a sufficiently large λ). This result is based on the proof that ERM of a strongly convex Lipschitz function is uniform replace-one stable, together with the connection between such stability and generalization shown in [4] (see also [19] for a detailed treatment of the relationship between generalization and stability). It is natural to ask whether other approaches to regularization ensure generalization. We demonstrate that for the commonly used ℓ_1 regularization the answer is negative. We prove this using a simple modification of our lower bound construction: we shift the functions to the positive orthant, where the regularization term λ‖x‖₁ is just a linear function. We then subtract this linear function from each function in our construction, thereby balancing out the regularization term (while maintaining convexity and Lipschitz-boundedness). The details of this construction appear in Sec. 3.3 (see Thm. 3.3).

Dependence on accuracy: For simplicity and convenience we have ignored the dependence on the accuracy ε, the Lipschitz bound L and the radius R of K in our lower bounds. It is easy to see that this more general setting can be reduced to the case we consider here (Lipschitz bound and radius equal to 1) with accuracy parameter ε/(LR). We generalize our lower bound to this setting and prove that Ω(d/ε²) samples are necessary for uniform convergence and Ω(d/ε) samples are necessary for generalization of ERM. Note that the upper bound on the sample complexity of these settings is O(d/ε²) and therefore the dependence on ε in our lower bound for ERM does not match the upper bound. Resolving this gap, or even improving either bound, is an interesting open problem. Additional details can be found in Section 3.4.

Bounded-range SCO: Finally, we consider a more general class of bounded-range convex functions. Note that a Lipschitz bound of 1 and a bound of 1 on the radius of K imply a bound of 1 on the range (up to a constant shift which does not affect the optimization problem). While this setting is not as well-studied, efficient algorithms for it are known. For example, the online algorithm in a recent work of Rakhlin and Sridharan [16], together with standard online-to-batch conversion arguments [6], implies a dimension-independent bound on the sample complexity of this problem for any K that is an ℓ_2 ball (of any radius). For general convex bodies K, the problem can be solved via random walk-based approaches [3, 10] or an adaptation of the center-of-gravity method given in [10]. Here we show that for this setting ERM might completely fail already for K being the unit 2-dimensional ball. The construction is based on ideas similar to those we used in the smooth case and is formally described in Sec. 5. See Figure 3 for an illustration of a function used in this construction.

Figure 3: Construction using non-Lipschitz convex functions with range in [0, 1].

2 Preliminaries

For an integer n, let [n] ≐ {1, ..., n}. Random variables are denoted by bold letters, e.g., f. Given p ∈ [1, ∞], we denote the ball of radius R in the ℓ_p norm by B_p^d(R), and the unit ℓ_p ball by B_p^d.

For a convex body (i.e., a compact convex set with nonempty interior) K ⊆ R^d, we consider problems of the form min_{x∈K} F(x) ≐ E_{f∼D}[f(x)], where f is a random variable defined over some set F of convex, sub-differentiable functions on K and distributed according to some unknown probability distribution D. We denote F* ≐ min_{x∈K} F(x). For an approximation parameter ε > 0, the goal is to find x ∈ K such that F(x) ≤ F* + ε, and we call any such x an ε-optimal solution. For an n-tuple of functions S = (f^1, ..., f^n) we denote by F_S ≐ 1/n ∑_{i≤n} f^i.

We say that a point x̂ is an empirical risk minimum for an n-tuple S of functions over K if F_S(x̂) = min_{x∈K} F_S(x). In some cases there are many points that minimize F_S, and in this case we refer to a specific algorithm that selects one of the minimums of F_S as an empirical risk minimizer. To make this explicit we refer to the output of such a minimizer by x̂_S.

Given x ∈ K and a convex function f, we denote by ∇f(x) an arbitrary selection of a subgradient. Let us make a brief reminder of some important classes of convex functions. Let p ∈ [1, ∞] and q ≐ p/(p−1). We say that a subdifferentiable convex function f is in the class

  • of B-bounded-range functions if for all x ∈ K, |f(x)| ≤ B;

  • of L-Lipschitz continuous functions w.r.t. ℓ_p, if for all x, y ∈ K, |f(x) − f(y)| ≤ L‖x − y‖_p;

  • of functions with σ-Lipschitz continuous gradient w.r.t. ℓ_p, if for all x, y ∈ K, ‖∇f(x) − ∇f(y)‖_q ≤ σ‖x − y‖_p;

  • of λ-strongly convex functions w.r.t. ℓ_p, if for all x, y ∈ K, f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (λ/2)‖y − x‖_p².

We will omit the corresponding parameter from the notation when it is equal to 1.

3 Lower Bounds for Lipschitz-Bounded SCO

In this section we present our main lower bounds for SCO of Lipschitz-bounded convex functions. For comparison purposes we start by formally stating some known bounds on the sample complexity of solving such problems. The following uniform convergence bounds can be easily derived from the standard covering number argument ([21, 18]). For p ∈ [1, ∞], let K ⊆ B_p^d and let D be any distribution supported on functions that are 1-Lipschitz on K relative to ℓ_p (not necessarily convex). Then, for every ε, δ > 0, a sample of size n = Õ((d + log(1/δ))/ε²) ensures that, with probability at least 1 − δ, sup_{x∈K} |F_S(x) − F(x)| ≤ ε.

The following upper bounds on the sample complexity of Lipschitz-bounded SCO can be obtained from several known algorithms [14, 18] (see [17] for a textbook exposition for p = 2). For p ∈ [1, 2], let K ⊆ B_p^d. Then there is an algorithm that, given ε, δ > 0 and n i.i.d. samples from any distribution D supported on 1-Lipschitz convex functions, outputs an ε-optimal solution to F over K with probability at least 1 − δ; for p = 2 it suffices that n = O(log(1/δ)/ε²), and for p = 1 that n = O(log d · log(1/δ)/ε²). Stronger results are known under additional assumptions on smoothness and/or strong convexity ([14, 15, 20, 1]).

3.1 Non-smooth construction

We will start with a simpler lower bound for non-smooth functions. For simplicity, we will also restrict ourselves to K = B_2^d. Lower bounds for the general setting can be easily obtained from this case by scaling the domain and the desired accuracy (see Thm. 3.4 for additional details).

We will need a set of vectors W ⊆ {−1, 1}^d with the following property: for any distinct w₁, w₂ ∈ W, ⟨w₁, w₂⟩ ≤ d/2. The Chernoff bound together with a standard packing argument imply that there exists a set W with this property of size at least 2^{d/6}.
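A quick numerical sanity check of this property (a sketch, with illustrative parameters d = 64 and 20 vectors): random sign vectors have pairwise inner products concentrated well below d/2, matching the Chernoff-bound argument:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 64, 20  # illustrative parameters
W = rng.choice([-1, 1], size=(m, d))  # m random sign vectors in {-1, 1}^d
G = W @ W.T                            # pairwise inner products
np.fill_diagonal(G, 0)
violations = int(np.sum(G > d / 2)) // 2  # pairs with <w1, w2> > d/2
# Hoeffding: each pair violates with probability <= exp(-d/8) ~ 3e-4, so
# for these parameters essentially all pairs satisfy the property.
```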

For any subset V of W we define a function

g_V(x) ≐ max{1/2, max_{w∈V} ⟨w̄, x⟩},     (3.1)

where w̄ ≐ w/‖w‖₂. See Figure 1 for an illustration. We first observe that g_V is convex and 1-Lipschitz (relative to ℓ_2). This immediately follows from ⟨w̄, x⟩ being convex and 1-Lipschitz for every w and g_V being the maximum of convex and 1-Lipschitz functions.

Let K = B_2^d and define F ≐ {g_V | V ⊆ W} for g_V defined in eq. (3.1). Let D be the uniform distribution over F. Then for n ≤ d/6 and every set of n samples there exists an ERM x̂ such that, with probability at least 1/2 over the choice of the samples, F(x̂) ≥ F* + 1/4.

We start by observing that the uniform distribution over F is equivalent to picking the function g_V where V is obtained by including every element of W with probability 1/2, randomly and independently of all other elements. Further, by the properties of W, for every w ∈ W and V ⊆ W, g_V(w̄) = 1 if w ∈ V and g_V(w̄) = 1/2 otherwise. For g_V chosen randomly with respect to D, we have that g_V(w̄) = 1 with probability exactly 1/2. This implies that F(w̄) = 3/4.

Let g_{V_1}, ..., g_{V_n} be the random samples. Observe that min_{x∈K} F_S(x) = 1/2 and F* = 1/2 (the minimum is achieved at the origin 0̄). Now, if W ⊄ ∪_{i≤n} V_i then let x̂ = w̄ for any w ∈ W \ ∪_{i≤n} V_i; otherwise x̂ is defined to be the origin 0̄. Then by the property of W mentioned above, we have that for all i, w ∉ V_i and hence F_S(w̄) = 1/2. This means that x̂ is a minimizer of F_S.

Combining these statements, we get that if W ⊄ ∪_{i≤n} V_i then there exists an ERM x̂ such that F_S(x̂) = 1/2 and F(x̂) = 3/4. Therefore to prove the claim it suffices to show that for n ≤ d/6 we have Pr[W ⊄ ∪_{i≤n} V_i] ≥ 1/2. This easily follows from observing that for the uniform distribution over subsets of W, for every w ∈ W, Pr[w ∉ ∪_{i≤n} V_i] = 2^{−n}, and this event is independent of the inclusion of the other elements in the V_i's. Therefore Pr[W ⊆ ∪_{i≤n} V_i] = (1 − 2^{−n})^{|W|} ≤ exp(−2^{−n} · 2^{d/6}) ≤ e^{−1} < 1/2.
In our construction there is a different ERM algorithm that does solve the problem (and generalizes well): for example, the algorithm that always outputs the origin 0̄. Therefore it is natural to ask whether the same lower bound holds when there exists a unique minimizer. Shalev-Shwartz et al. [18] show that their lower bound construction can be slightly modified to ensure that the minimizer is unique while still having large generalization error. An analogous modification appears to be much harder to analyze in our construction and it is unclear to us how to ensure uniqueness in our strong lower bounds. A further question in this direction is whether it is possible to construct a distribution for which the empirical minimizer with large generalization error is unique and its value is noticeably smaller than the value of F_S at any point that generalizes well. Such a distribution would imply that solutions that “overfit” can be found easily (for example, in a polynomial number of iterations of gradient descent).

Other norms:

We now observe that exactly the same approach can be used to extend this lower bound to the general ℓ_p/ℓ_q setting. Specifically, for p ∈ [1, ∞], q ≐ p/(p−1) and w ∈ {−1, 1}^d, we define w̄ ≐ w/‖w‖_q and define g_V as in eq. (3.1).

It is easy to see that for every V, g_V is 1-Lipschitz relative to ℓ_p. We can now use the same argument as before with the appropriate normalization factor for points in B_p^d. Namely, instead of w̄ for w ∈ W we consider the values of the minimized functions at w/‖w‖_p (note that ⟨w/‖w‖_q, w/‖w‖_p⟩ = 1 for w ∈ {−1, 1}^d). This gives the following generalization of Thm. 3.1. For every p ∈ [1, ∞], let K = B_p^d, define F_p ≐ {g_V | V ⊆ W} with the normalization above, and let D be the uniform distribution over F_p. Then for n ≤ d/6 and every set of n samples there exists an ERM x̂ such that, with probability at least 1/2, F(x̂) ≥ F* + 1/4.

3.2 Smoothness does not help

We now extend the lower bound to smooth functions. For simplicity we restrict our attention to ℓ_2/ℓ_2, but analogous modifications can be made for other norms. The functions g_V that we used in the basic construction involve two maximum operators, each of which introduces non-smoothness. To deal with the maximum with 1/2, we simply replace the linear components with a quadratically smoothed version (in the same way as hinge loss is sometimes replaced with modified Huber loss). To deal with the maximum over all w ∈ V, we show that it is possible to ensure that individual components do not “interact”. That is, at every point x, the value, gradient and Hessian of at most one component function are non-zero (value, vector and matrix, respectively). This ensures that the maximum becomes addition and Lipschitz/smoothness constants can be upper-bounded easily.

Formally, we let ν be a quadratically smoothed hinge: a convex, non-decreasing, 1-smooth function with ν(a) = 0 for all a ≤ 0. Now, for V ⊆ W, we define

h_V(x) ≐ ∑_{w∈V} ν(⟨w̄, x⟩ − 7/8).     (3.2)

See Figure 2 for an illustration. We first prove that for every V, the function h_V defined in eq. (3.2) is convex, 1-Lipschitz and 1-smooth.
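For concreteness, one standard choice of such a ν (an assumption made for illustration; the paper's exact definition may differ) is ν(a) = 0 for a ≤ 0 and ν(a) = a²/2 for a > 0, whose derivative is 1-Lipschitz:

```python
import numpy as np

def nu(a):
    # Candidate smoothed hinge (assumed, for illustration): 0 on a <= 0,
    # a^2/2 on a > 0. It is convex, non-decreasing and vanishes for a <= 0.
    a = np.asarray(a, dtype=float)
    return np.where(a > 0, 0.5 * a * a, 0.0)

def nu_grad(a):
    # nu'(a) = max(0, a), which is 1-Lipschitz, so nu is 1-smooth
    a = np.asarray(a, dtype=float)
    return np.where(a > 0, a, 0.0)

a = np.linspace(-2, 2, 4001)
slopes = np.abs(np.diff(nu_grad(a))) / np.diff(a)
max_slope = float(np.max(slopes))  # empirical Lipschitz constant of nu'
```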


It is easy to see that ν(⟨w̄, x⟩ − 7/8) is convex for every w and hence h_V is convex. Next we observe that for every point x ∈ B_2^d, there is at most one w ∈ V such that ν(⟨w̄, x⟩ − 7/8) > 0. Indeed, if ⟨w̄, x⟩ > 7/8 then x must be close to w̄; on the other hand, by the properties of W, for distinct w₁, w₂ ∈ W we have ⟨w̄₁, w̄₂⟩ ≤ 1/2 and hence w̄₁ and w̄₂ are far apart. Combining these bounds on distances, if we assume that ⟨w̄₁, x⟩ > 7/8 and ⟨w̄₂, x⟩ > 7/8 for distinct w₁, w₂ then we obtain a contradiction.

From here we can conclude that at every x ∈ B_2^d at most one term of h_V contributes to its value and gradient. This immediately implies that ‖∇h_V(x)‖₂ ≤ 1 and hence h_V is 1-Lipschitz.

We now prove smoothness. Given two points x, y ∈ B_2^d, we consider two cases. First, the simpler case when there is at most one w ∈ V such that either ⟨w̄, x⟩ > 7/8 or ⟨w̄, y⟩ > 7/8. In this case the 1-smoothness condition ‖∇h_V(x) − ∇h_V(y)‖₂ ≤ ‖x − y‖₂ is implied by the 1-smoothness of the single active component ν(⟨w̄, ·⟩ − 7/8).

Next we consider the case where for x there is w₁ ∈ V such that ⟨w̄₁, x⟩ > 7/8, for y there is w₂ ∈ V such that ⟨w̄₂, y⟩ > 7/8, and w₁ ≠ w₂. Then there exists a point z on the line connecting x and y such that ⟨w̄₁, z⟩ ≤ 7/8 and ⟨w̄₂, z⟩ ≤ 7/8. Clearly, ‖x − y‖₂ = ‖x − z‖₂ + ‖z − y‖₂. On the other hand, by the analysis of the previous case we have that ‖∇h_V(x) − ∇h_V(z)‖₂ ≤ ‖x − z‖₂ and ‖∇h_V(z) − ∇h_V(y)‖₂ ≤ ‖z − y‖₂. Combining these inequalities via the triangle inequality we obtain ‖∇h_V(x) − ∇h_V(y)‖₂ ≤ ‖x − y‖₂.

From here we can use the proof approach from Thm. 3.1 but with h_V in place of g_V. Let K = B_2^d and define H ≐ {h_V | V ⊆ W} for h_V defined in eq. (3.2). Let D be the uniform distribution over H. Then for n ≤ d/6 and every set of n samples there exists an ERM x̂ with constant generalization error (with probability at least 1/2).


Let h_{V_1}, ..., h_{V_n} be the random samples. As before we first note that min_{x∈K} F_S(x) = 0 and F* = 0 (both achieved at the origin, where every term ν(⟨w̄, x⟩ − 7/8) vanishes). Further, for every w ∈ W, h_V(w̄) = ν(1/8) if w ∈ V and h_V(w̄) = 0 otherwise. Hence F(w̄) = ν(1/8)/2. Now, if W ⊄ ∪_{i≤n} V_i then let x̂ = w̄ for some w ∈ W \ ∪_{i≤n} V_i. Then for all i, h_{V_i}(w̄) = 0 and hence F_S(w̄) = 0. This means that x̂ is a minimizer of F_S and F(x̂) = ν(1/8)/2.

Now, exactly as in Thm. 3.1, we can conclude that F(x̂) ≥ F* + ν(1/8)/2 with probability at least 1/2. ∎

3.3 Regularization does not help

Next we show that the lower bound holds even with an additional ℓ_1 regularization term λ‖x‖₁ for positive λ. (Note that if λ is too large then the resulting program is no longer 1-Lipschitz relative to ℓ_2; any constant λ can be allowed for the ℓ_1/ℓ_∞ setup.) To achieve this we shift the construction to the positive orthant (that is, x such that x_j ≥ 0 for all j ∈ [d]). In this orthant the subgradient of the regularization term is simply λ1̄, where 1̄ is the all-1's vector. We can add a linear term to each function in our distribution that balances this term, thereby reducing the analysis to the non-regularized case. More formally, we define the following family of functions. For V ⊆ W,

h_V^λ(x) ≐ h_V(x − 1̄/d) − λ⟨1̄, x⟩.     (3.3)

Note that over K, h_V^λ is Lipschitz-bounded for sufficiently small λ. We now state and prove this formally.
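The balancing identity behind this shift is easy to verify numerically: for any x in the positive orthant, the subtracted linear term exactly cancels the added ℓ_1 regularizer. The sketch below uses a placeholder convex h standing in for h_V (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
d, lam = 5, 0.1
ones = np.ones(d)

def h(x):
    # placeholder convex function standing in for h_V (hypothetical)
    return float(np.sum(np.maximum(x, 0.0) ** 2))

def h_reg(x):
    # shifted construction minus the balancing linear term
    return h(x - ones / d) - lam * float(ones @ x)

x = rng.random(d)  # any point in the positive orthant
lhs = h_reg(x) + lam * float(np.sum(np.abs(x)))  # regularized objective
rhs = h(x - ones / d)                            # unregularized shifted one
# On the positive orthant, lam * ||x||_1 equals lam * <1, x>, so lhs == rhs.
```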

Let K = B_2^d and for a given λ > 0, define H^λ ≐ {h_V^λ | V ⊆ W} for h_V^λ defined in eq. (3.3). Let D be the uniform distribution over H^λ. Then for n ≤ d/6 and every set of n samples there exists a minimizer x̂ of the ℓ_1-regularized empirical objective that has constant generalization error (with probability at least 1/2).


Let h^λ_{V_1}, ..., h^λ_{V_n} be the random samples. We first note that

min_{x∈K} (F_S(x) + λ‖x‖₁) = min_{x∈K} [ 1/n ∑_{i∈[n]} h_{V_i}(x − 1̄/d) − λ⟨1̄, x⟩ + λ‖x‖₁ ] ≥ min_{x∈K} 1/n ∑_{i∈[n]} h_{V_i}(x − 1̄/d) ≥ 0,

where we used that λ‖x‖₁ ≥ λ⟨1̄, x⟩ for every x.

Further, for every w ∈ W, the corresponding witness point is in the positive orthant and in K; at such points the ℓ_1 regularization term coincides with the subtracted linear term. We can therefore apply the analysis from Thm. 3.2 to obtain the claim. ∎

3.4 Dependence on the accuracy

We now briefly consider the dependence of our lower bound on the desired accuracy ε. Note that the upper bound for uniform convergence scales as Õ(d/ε²).

We first observe that our construction implies a lower bound of Ω̃(d/ε²) for uniform convergence, nearly matching the upper bound (we do this for the simpler non-smooth setting, but the same applies to the other settings we consider). Let K = B_2^d and define F ≐ {g_V | V ⊆ W} for g_V defined in eq. (3.1), and let D be the uniform distribution over F. Then for any ε > 0 and n ≤ cd/ε² (for a suitable constant c), with probability at least 1/2 over the set of n samples, there exists a point x̄ ∈ K such that |F_S(x̄) − F(x̄)| ≥ ε.


For every w ∈ W,

F_S(w̄) − F(w̄) = 1/2 · (1/n ∑_{i∈[n]} 1_{w∈V_i} − 1/2),

where 1_{w∈V_i} is the indicator variable of w being in V_i. If ∑_{i∈[n]} 1_{w∈V_i} ≥ n/2 + 2εn for some w, then we will obtain a point that violates the uniform convergence by ε. For every w, ∑_{i∈[n]} 1_{w∈V_i} is distributed according to the binomial distribution. Using a standard approximation of the partial binomial sum up to n/2 + 2εn, we obtain that for some constant c > 0, the probability that this sum is at least n/2 + 2εn is at least 2^{−cε²n}. Now, using independence between different w ∈ W, we can conclude that for n small enough that 2^{−cε²n} · 2^{d/6} ≥ 1, the probability that there exists w for which uniform convergence is violated is at least 1 − (1 − 2^{−cε²n})^{2^{d/6}} ≥ 1 − e^{−1} > 1/2.
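The "many independent binomial deviations" argument can be illustrated numerically (a sketch with illustrative parameters n = 100, ε = 0.1 and 10^6 independent coordinates standing in for the elements of W):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, m = 100, 0.1, 1_000_000  # illustrative parameters
# For each of m independent coordinates w, the count sum_i 1[w in V_i] is
# Binomial(n, 1/2); a deviation of eps corresponds to exceeding n/2 + 2*eps*n.
counts = rng.binomial(n, 0.5, size=m)
violators = int(np.sum(counts >= n / 2 + 2 * eps * n))
# Each coordinate deviates only with tiny probability, yet among many
# independent coordinates some deviation occurs with overwhelming probability.
```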

A natural question is whether the 1/ε² dependence also holds for ERM. We could not answer it and prove only a weaker 1/ε lower bound. For completeness, we also make this statement for a general radius R and Lipschitz bound L. We define g_V as in eq. (3.1) (scaled to radius R and Lipschitz bound L) and define the random variable V as a random subset of W obtained by including each element of W with a probability proportional to ε, randomly and independently. Let D be the probability distribution of the resulting random function g_V. Then for n = O(d/ε) and every set of n samples there exists an ERM x̂ with generalization error Ω(εLR) (with probability at least 1/2).


By the same argument as in the proof of Thm. 3.1 we have that: For every , and , if and otherwise. For chosen randomly with respect to , we have that with probability . This implies that . Similarly, and .

Therefore, if then there exists an ERM such that and . For the distribution and every ,

and this event is independent from the inclusion of other elements in (where we used that for ). Therefore

4 Lower Bound for Low-Complexity Functions

We will now demonstrate that our lower bounds hold even if one restricts attention to functions that can be computed efficiently (in time polynomial in d). For this purpose we will rely on known constructions of binary linear error-correcting codes. We describe the construction for the non-smooth setting; analogous versions of the other constructions can be obtained in the same way.

We start by briefly providing the necessary background about binary codes. For two vectors y, z ∈ {0, 1}^d, let Δ(y, z) denote the Hamming distance between them. We say that a mapping G: {0, 1}^k → {0, 1}^d is a binary error-correcting code of distance r if Δ(G(y), G(z)) ≥ r for all distinct y, z; G can be computed in time polynomial in d; and there exists an algorithm that, for every z such that Δ(z, G(y)) < r/2 for some y, finds such y in time polynomial in d (note that such y is unique).
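As a toy stand-in for such a code (illustrative only; real asymptotically good constructions achieve constant rate and relative distance, unlike this one), a length-8 repetition code already exhibits the distance and unique-decoding properties used here:

```python
import numpy as np

# Toy stand-in for a binary error-correcting code (illustrative only):
# a length-8 repetition code with k = 1 and distance r = 8.
def encode(b):           # {0,1} -> {0,1}^8
    return np.full(8, b, dtype=int)

def decode(y):           # unique decoding of up to floor((r-1)/2) = 3 flips
    return int(np.sum(y) > 4)  # majority vote = nearest codeword

y = encode(1)
y[[0, 3, 6]] ^= 1        # corrupt 3 < r/2 coordinates
recovered = decode(y)    # still decodes to the original message bit
```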

Given a code G of distance r, for every index j we define a function

g_j(x) ≐ max{1 − r/(2d), max_{w∈W_j} ⟨w̄, x⟩},     (4)

where W_j is a set of (scaled) codewords of G determined by the index j. As before, we note that g_j is convex and 1-Lipschitz (relative to ℓ_2). Let K = B_2^d, define the family of functions {g_j} for g_j defined in eq. (4), and let D be the uniform distribution over this family. Then for every j, g_j can be computed in time polynomial in d. Further, for n small enough and every set of n samples there exists an ERM x̂ with constant generalization error.


Let w̄ ≐ w/‖w‖₂ for each (scaled) codeword w. For every distinct w₁, w₂, Δ(w₁, w₂) ≥ r. Therefore, by the definition of g_j, for every w, g_j(w̄) = 1 if w ∈ W_j and g_j(w̄) = 1 − r/(2d) otherwise. Now for w, let m_w denote the number of indices j such that w ∈ W_j. For every w there are exactly as many indices j with w ∈ W_j as with w ∉ W_j. This means that for a function chosen randomly with respect to D, we have that g_j(w̄) = 1 with probability exactly 1/2. This implies that F(w̄) = 1 − r/(4d).

Let be any set of points from . Observe that and (the minimum is achieved at the origin ). Now, for let denote the vector such that if and , otherwise. Clearly, . Let and let