Characterizing Implicit Bias in Terms of Optimization Geometry

02/22/2018 ∙ by Suriya Gunasekar, et al. ∙ University of Southern California ∙ Toyota Technological Institute at Chicago

We study the bias of generic optimization methods, including Mirror Descent, Natural Gradient Descent and Steepest Descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We ask the question of whether the global minimum (among the many possible global minima) reached by optimization algorithms can be characterized in terms of the potential or norm, and independently of hyperparameter choices such as step size and momentum.


1 Introduction

Implicit bias from the optimization algorithm plays a crucial role in learning deep neural networks, as it introduces effective capacity control not directly specified in the objective (Neyshabur et al., 2015b, a; Zhang et al., 2017; Keskar et al., 2016; Wilson et al., 2017; Neyshabur et al., 2017). In overparameterized models where the training objective has many global minima, optimizing using a specific algorithm, such as gradient descent, implicitly biases the solutions to some special global minima. The properties of the learned model, including its generalization performance, are thus crucially influenced by the choice of optimization algorithm. In neural networks especially, characterizing these special global minima for common algorithms such as stochastic gradient descent (SGD) is essential for understanding what the inductive bias of the learned model is, and why such large capacity networks often show remarkably good generalization even in the absence of explicit regularization (Zhang et al., 2017) or early stopping (Hoffer et al., 2017).

Implicit bias from optimization depends on the choice of algorithm, and changing the algorithm, or even changing an associated hyperparameter, can change the implicit bias. For example, Wilson et al. (2017) showed that for some standard deep learning architectures, variants of the SGD algorithm with different choices of momentum and adaptive gradient updates (AdaGrad and Adam) exhibit different biases and thus have different generalization performance; Keskar et al. (2016), Hoffer et al. (2017), and Smith et al. (2018) study how the size of the mini-batches used in SGD influences generalization; and Neyshabur et al. (2015a) compare the bias of path-SGD (steepest descent with respect to a scale invariant path-norm) to standard SGD.

It is therefore important to explicitly relate different optimization algorithms to their implicit biases. Can we precisely characterize which global minima different algorithms converge to? How does this depend on the loss function? On what other choices, including initialization, step-size, momentum, stochasticity, and adaptivity, does the implicit bias depend? In this paper, we provide answers to some of these questions for simple linear regression and classification models. While neural networks are certainly more complicated than these simple linear models, the results here provide a segue into understanding such biases for more complex models.

For linear models, we already have an understanding of the implicit bias of gradient descent. For an underdetermined least squares objective, gradient descent can be shown to converge to the minimum Euclidean norm solution. Recently, Soudry et al. (2017) studied gradient descent for linear logistic regression. The logistic loss is fundamentally different from the squared loss in that the loss function has no attainable global minimum. Gradient descent iterates therefore diverge (the norm goes to infinity), but Soudry et al. (2017) showed that they diverge in the direction of the hard margin support vector machine solution, and therefore the decision boundary converges to this maximum margin separator.

Can we extend such a characterization to other optimization methods that work under different (non-Euclidean) geometries, such as mirror descent with respect to some potential, natural gradient descent with respect to a Riemannian metric, and steepest descent with respect to a generic norm? Can we relate the implicit bias to these geometries?

As we shall see, the answer depends on whether the loss function is similar to a squared loss or to a logistic loss. This difference is captured by two families of losses: (a) loss functions that have a unique finite root, like the squared loss, and (b) strictly monotone loss functions where the infimum is unattainable, like the logistic loss. For losses with a unique finite root, we study the limit point of the optimization iterates, $w_\infty = \lim_{t\to\infty} w_t$. For monotone losses, we study the limit direction $\bar{w}_\infty = \lim_{t\to\infty} \frac{w_t}{\|w_t\|}$.

In Section 2 we study linear models with loss functions that have unique finite roots. We obtain a robust characterization of the limit point for mirror descent, and discuss how it is independent of the step-size and momentum. For natural gradient descent, we show that the step-size does play a role, but we obtain a characterization for infinitesimal step-size. For steepest descent, we show that not only does the step-size affect the limit point, but even with infinitesimal step-size, the expected characterization does not hold. The situation is fundamentally different for strictly monotone losses such as the logistic loss (Section 3), where we do get a precise characterization of the limit direction for generic steepest descent. We also study the adaptive gradient descent method (AdaGrad) (Duchi et al., 2011) (Section 3.3) and optimization over matrix factorizations (Section 4). Recent studies considered the bias of such methods for least squares problems (Wilson et al., 2017; Gunasekar et al., 2017); here we study these algorithms for monotone loss functions, obtaining a more robust characterization for matrix factorization problems, while concluding that the implicit bias of AdaGrad depends on initial conditions, including the step-size, even for strictly monotone losses.

2 Losses with a Unique Finite Root

We first consider learning linear models using losses with a unique finite root, such as the squared loss, where the loss between a prediction $\hat{y}$ and a label $y$ is minimized at a unique and finite value of $\hat{y}$. We assume, without loss of generality, that the minimum value of the loss is zero and the unique minimizer is $\hat{y} = y$.

Property 1 (Losses with a unique finite root).

For any $y$, a sequence $\{\hat{y}_t\}_t$ minimizes $\ell(\cdot, y)$, i.e., $\ell(\hat{y}_t, y) \to \inf_{\hat{y}} \ell(\hat{y}, y) = 0$, if and only if $\hat{y}_t \to y$.

Denote the training dataset $\{(x_n, y_n)\}_{n=1}^N$, with features $x_n \in \mathbb{R}^d$ and labels $y_n \in \mathbb{R}$. The empirical loss (or risk) of a linear model with parameters $w \in \mathbb{R}^d$ is given by,

$\mathcal{L}(w) = \sum_{n=1}^N \ell(\langle w, x_n \rangle, y_n).$   (1)

We are particularly interested in the case where $N < d$ and the observations are realizable, i.e., $\min_w \mathcal{L}(w) = 0$. Under these conditions, the optimization problem in eq. (1) is underdetermined and has multiple global minima, denoted by $\mathcal{G} = \{w : \forall n,\ \langle w, x_n\rangle = y_n\}$. Note that the set of global minima is the same for any loss with a unique finite root (Property 1), including, e.g., the Huber loss and the truncated squared loss. Which specific global minimum do different optimization algorithms reach when minimizing the empirical loss objective $\mathcal{L}(w)$?

2.1 Gradient descent

Consider gradient descent updates for minimizing $\mathcal{L}(w)$, with step-size sequence $\{\eta_t\}$ and initialization $w_0$,

$w_{t+1} = w_t - \eta_t \nabla\mathcal{L}(w_t).$

If the iterates $w_t$ converge to a minimizer $w_\infty$ of the empirical loss in eq. (1), then $w_\infty$ is the unique global minimum that is closest to the initialization $w_0$ in $\ell_2$ distance, i.e., $w_\infty = \arg\min_{w\in\mathcal{G}} \|w - w_0\|_2$. This can be easily seen: for any $w$, the gradients $\nabla\mathcal{L}(w) = \sum_n \ell'(\langle w, x_n\rangle, y_n)\, x_n$ are always constrained to the fixed subspace spanned by the data $\{x_n\}_n$, and thus the iterates $w_t$ are confined to the low dimensional affine manifold $w_0 + \mathrm{span}(\{x_n\}_n)$. Within this low dimensional manifold, there is a unique global minimizer that satisfies the linear constraints defining $\mathcal{G}$.
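As a quick numerical illustration of this argument, the following minimal sketch (toy data, step-size, and iteration count of our own choosing, not from the paper) runs gradient descent on an underdetermined least squares problem and compares the limit against the global minimum closest to the initialization, computed via the pseudo-inverse.

```python
# Gradient descent on an underdetermined least squares problem converges to
# the interpolating solution closest to the initialization in l2 distance.
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 20                          # fewer examples than parameters
X = rng.standard_normal((N, d))
y = rng.standard_normal(N)

def grad(w):
    return X.T @ (X @ w - y)          # gradient of 0.5 * ||Xw - y||^2

w0 = rng.standard_normal(d)           # arbitrary initialization
w = w0.copy()
for _ in range(50_000):
    w -= 1e-2 * grad(w)

# Closest global minimum to w0: project the residual onto the span of the data.
w_star = w0 + np.linalg.pinv(X) @ (y - X @ w0)
print(np.linalg.norm(X @ w - y))      # ~0: a global minimum was reached
print(np.linalg.norm(w - w_star))     # ~0: it is the one closest to w0
```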

The same argument also extends to updates with instance-wise stochastic gradients, where we use a stochastic estimate $\widetilde{\nabla}\mathcal{L}(w_t)$ of the full gradient computed from a random subset of instances $S_t \subseteq \{1,\ldots,N\}$,

$\widetilde{\nabla}\mathcal{L}(w_t) = \sum_{n \in S_t} \nabla \ell(\langle w_t, x_n\rangle, y_n).$   (2)

Moreover, when initialized with zero momentum $\Delta w_{-1} = 0$, the implicit bias characterization also extends to the following generic momentum and acceleration based updates,

$w_{t+1} = w_t + \beta_t \Delta w_{t-1} - \eta_t \nabla\mathcal{L}(w_t + \gamma_t \Delta w_{t-1}),$   (3)

where $\Delta w_{t-1} = w_t - w_{t-1}$. This includes Nesterov's acceleration ($\beta_t = \gamma_t$) (Nesterov, 1983) and Polyak's heavy ball momentum ($\gamma_t = 0$) (Polyak, 1964).

For losses with a unique finite root, the implicit bias of gradient descent therefore depends only on the initialization, and not on the step-size, momentum, or mini-batch size. Can we get such a succinct characterization for other optimization algorithms? That is, can we characterize the bias in terms of the optimization geometry and initialization, independently of the choice of step-sizes, momentum, and stochasticity?

2.2 Mirror descent

Mirror descent (MD) (Beck & Teboulle, 2003; Nemirovskii & Yudin, 1983) was introduced as a generalization of gradient descent for optimization over geometries beyond the Euclidean geometry of gradient descent. In particular, mirror descent updates are defined for any strongly convex and differentiable potential $\psi$ as

$w_{t+1} = \arg\min_{w \in \mathcal{W}} \; \eta_t \langle \nabla\mathcal{L}(w_t), w\rangle + D_\psi(w, w_t),$   (4)

where $D_\psi(w, w') = \psi(w) - \psi(w') - \langle\nabla\psi(w'), w - w'\rangle$ is the Bregman divergence (Bregman, 1967) w.r.t. $\psi$, and $\mathcal{W}$ is some constraint set for the parameters $w$.

We first look at unconstrained optimization, where $\mathcal{W} = \mathbb{R}^d$ and the update in eq. (4) is equivalent to

$\nabla\psi(w_{t+1}) = \nabla\psi(w_t) - \eta_t \nabla\mathcal{L}(w_t).$   (5)

For a strongly convex potential $\psi$, the map $\nabla\psi$ is called the link function and is invertible. Hence, the above updates are uniquely defined. Also, $w$ and $\nabla\psi(w)$ are referred to as the primal and dual variables, respectively.

Examples of potentials $\psi$ for mirror descent include the squared $\ell_2$ norm $\psi(w) = \frac{1}{2}\|w\|_2^2$, which leads to gradient descent; the entropy potential $\psi(w) = \sum_i w[i]\log w[i] - w[i]$; the spectral entropy for matrix valued $w$, where $\psi(w)$ is the entropy potential on the singular values of $w$; general quadratic potentials $\psi(w) = \frac{1}{2} w^\top D w$ for any positive definite matrix $D$; and the squared $\ell_p$ norms for $p \in (1, 2]$.

From eq. (5), we see that rather than the primal iterates $w_t$, it is the dual iterates $\nabla\psi(w_t)$ that are constrained to the low dimensional data manifold $\nabla\psi(w_0) + \mathrm{span}(\{x_n\}_n)$. The arguments for gradient descent can now be generalized to obtain the following result.

Theorem 1.

For any loss with a unique finite root (Property 1), any realizable dataset $\{(x_n, y_n)\}_{n=1}^N$, and any strongly convex potential $\psi$, consider the mirror descent iterates $w_t$ from eq. (5) for minimizing the empirical loss $\mathcal{L}(w)$ in eq. (1). For all initializations $w_0$, if the step-size sequence $\{\eta_t\}$ is chosen such that the limit point of the iterates $w_\infty = \lim_{t\to\infty} w_t$ is a global minimizer of $\mathcal{L}$, i.e., $\mathcal{L}(w_\infty) = 0$, then $w_\infty$ is given by

$w_\infty = \arg\min_{w \in \mathcal{G}} D_\psi(w, w_0).$   (6)

In particular, if we start at $w_0 = \arg\min_w \psi(w)$ (so that $\nabla\psi(w_0) = 0$), then we get to $w_\infty = \arg\min_{w \in \mathcal{G}} \psi(w)$, where recall that $\mathcal{G}$ is the set of global minima for $\mathcal{L}(w)$.

The analysis of Theorem 1 can also be extended to special cases of constrained mirror descent (eq. (4)), when $\mathcal{L}(w)$ is minimized over realizable affine equality constraints.

Theorem 1a.

Under the conditions of Theorem 1, consider the constrained mirror descent updates from eq. (4) with realizable affine equality constraints, that is, $\mathcal{W} = \{w : Gw = h\}$ for some $G \in \mathbb{R}^{k \times d}$ and $h \in \mathbb{R}^k$, and additionally $\mathcal{W} \cap \mathcal{G} \ne \emptyset$. For all initializations $w_0 \in \mathcal{W}$, if the step-size sequence $\{\eta_t\}$ is chosen to asymptotically minimize $\mathcal{L}$, i.e., $\mathcal{L}(w_t) \to 0$, then $w_t \to w_\infty = \arg\min_{w \in \mathcal{G}\cap\mathcal{W}} D_\psi(w, w_0)$.

For example, in exponentiated gradient descent (Kivinen & Warmuth, 1997), which is mirror descent w.r.t. the entropy potential $\psi(w) = \sum_i w[i]\log w[i]$, under the explicit simplex constraint $\mathcal{W} = \{w : w \ge 0, \sum_i w[i] = 1\}$, Theorem 1a shows that using the uniform initialization $w_0 = \frac{1}{d}\mathbf{1}$, mirror descent will return the maximum entropy solution $w_\infty = \arg\min_{w \in \mathcal{G}\cap\mathcal{W}} \sum_i w[i]\log w[i]$.
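The sketch below illustrates this claim numerically (the dataset, step-size, and iteration counts are our own toy choices, not from the paper): exponentiated gradient descent started from the uniform distribution is compared against the maximum entropy interpolating solution computed by a generic constrained solver.

```python
# Exponentiated gradient descent (mirror descent w.r.t. the entropy potential)
# on a toy realizable problem over the simplex, compared against the maximum
# entropy interpolator found by a generic constrained optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
d, N = 10, 3
X = rng.standard_normal((N, d))
w_true = rng.dirichlet(np.ones(d))         # a feasible point on the simplex
y = X @ w_true

def grad(w):                               # gradient of 0.5 * ||Xw - y||^2
    return X.T @ (X @ w - y)

w = np.ones(d) / d                         # uniform initialization
for _ in range(200_000):
    w = w * np.exp(-1e-2 * grad(w))        # multiplicative (EG) update
    w /= w.sum()                           # renormalize onto the simplex

# Independent check: maximum entropy solution among simplex interpolators.
res = minimize(lambda v: np.sum(v * np.log(v + 1e-12)),
               x0=np.ones(d) / d,
               constraints=[{'type': 'eq', 'fun': lambda v: X @ v - y},
                            {'type': 'eq', 'fun': lambda v: v.sum() - 1.0}],
               bounds=[(1e-12, 1.0)] * d)

print(np.linalg.norm(X @ w - y))                     # ~0: w interpolates
print(-np.sum(w * np.log(w)), -np.sum(res.x * np.log(res.x)))  # entropies should (approximately) agree
```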

Let us now consider momentum for mirror descent. There are two possible generalizations of the gradient descent momentum in eq. (3): adding momentum either to the primal variables $w_t$, or to the dual variables $z_t = \nabla\psi(w_t)$,

Dual momentum: $\nabla\psi(w_{t+1}) = z_t + \beta_t \Delta z_{t-1} - \eta_t \nabla\mathcal{L}(w_t + \gamma_t \Delta w_{t-1}),$   (7)
Primal momentum: $\nabla\psi(w_{t+1}) = \nabla\psi\big(w_t + \beta_t \Delta w_{t-1}\big) - \eta_t \nabla\mathcal{L}(w_t + \gamma_t \Delta w_{t-1}),$   (8)

where $z_t = \nabla\psi(w_t)$, and for $t \ge 0$, $\Delta w_{t-1} = w_t - w_{t-1}$ and $\Delta z_{t-1} = z_t - z_{t-1}$ (with $\Delta w_{-1} = \Delta z_{-1} = 0$) are the momentum terms in the primal and dual space, respectively; $\{\beta_t\}$ and $\{\gamma_t\}$ are the momentum parameters.

If we initialize at $w_0$ with $\nabla\psi(w_0) = 0$, then even with dual momentum, $\nabla\psi(w_t)$ continues to remain in the data manifold $\mathrm{span}(\{x_n\}_n)$. This leads to the following extension of Theorem 1.

Theorem 1b.

Under the conditions in Theorem 1, if initialized at $\nabla\psi(w_0) = 0$, then the mirror descent updates with dual momentum also converge to (6), i.e., for all $\{\eta_t\}$, $\{\beta_t\}$, and $\{\gamma_t\}$, if the iterates $w_t$ from eq. (7) converge to $w_\infty \in \mathcal{G}$, then $w_\infty = \arg\min_{w \in \mathcal{G}} \psi(w)$.

Remark 1.

Following the same arguments, we can show that Theorems 1-1b also hold when the instance-wise stochastic gradients defined in eq. (2) are used in place of $\nabla\mathcal{L}(w_t)$.

Figure 1: Dependence of the implicit bias on step-size and momentum. In each panel, the blue line denotes the set of global minima $\mathcal{G}$ for the respective example. In (a) and (b), $\psi$ is the entropy potential and all algorithms are initialized with $w_0$ such that $\nabla\psi(w_0) = 0$; $w^*_\psi = \arg\min_{w\in\mathcal{G}}\psi(w)$ denotes the minimum potential global minimum we expect to converge to. (a) Mirror descent with primal momentum (Example 2): the global minimum that eq. (8) converges to depends on the momentum parameters; the sub-plots contain the trajectories of eq. (8) for different choices of $\{\beta_t\}$ and $\{\gamma_t\}$. (b) Natural gradient descent (Example 3): for different step-sizes $\eta$, eq. (9) converges to different global minima; here, $\eta$ was chosen to be small enough to keep the iterates positive. (c) Steepest descent w.r.t. $\|\cdot\|_{4/3}$ (Example 4): the global minimum to which eq. (11) converges depends on $\eta$. Here $w_0 = 0$, $w^*_{\|\cdot\|}$ denotes the minimum norm global minimum, and $w_\infty^{\eta\to 0}$ denotes the solution of infinitesimal SD with $\eta \to 0$. Note that even as $\eta \to 0$, the expected characterization does not hold, i.e., $w_\infty^{\eta\to 0} \ne w^*_{\|\cdot\|}$.

Let us now look at primal momentum. For general potentials $\psi$, the dual iterates $\nabla\psi(w_t)$ from the primal momentum updates can fall off the data manifold, and the additional components influence the final solution. Thus, the specific global minimum that the iterates converge to will depend on the values of the momentum parameters and step-sizes, as demonstrated in the following example.

Example 2.

Consider optimizing the squared loss on a simple dataset using the primal momentum updates from eq. (8) for MD w.r.t. the entropy potential $\psi(w) = \sum_i w[i]\log w[i] - w[i]$. For an initialization $w_0$ with $\nabla\psi(w_0) = 0$, Figure 1(a) shows how different choices of momentum change the limit point $w_\infty$. Additionally, we show the following:

Proposition 2a.

In Example 2, consider the case where primal momentum is used only in the first step, but $\beta_t = 0$ and $\gamma_t = 0$ for all $t > 1$. For any $\beta_1 > 0$, there exists a step-size sequence $\{\eta_t\}$ such that $w_t$ from eq. (8) converges to a global minimum, but not to $\arg\min_{w\in\mathcal{G}}\psi(w)$.

2.3 Natural gradient descent

Natural gradient descent (NGD) was introduced by Amari (1998) as a modification of gradient descent, wherein the updates are chosen to be the steepest descent direction w.r.t. a Riemannian metric tensor $H$ that maps each $w$ to a positive definite local metric $H(w)$. The updates are given by,

$w_{t+1} = w_t - \eta_t\, H(w_t)^{-1}\nabla\mathcal{L}(w_t).$   (9)

In many instances, the metric tensor is specified by the Hessian of a strongly convex potential $\psi$. For example, when the metric over the Riemannian manifold is the KL divergence between the distributions parameterized by $w$ and by $w + \mathrm{d}w$, the metric tensor is given by $H(w) = \nabla^2\psi(w)$, where the potential $\psi$ is the entropy potential over $w$.

Connection to mirror descent

When $H(w) = \nabla^2\psi(w)$ for a strongly convex potential $\psi$, as the step-size goes to zero, the iterates from natural gradient descent in eq. (9) and mirror descent w.r.t. $\psi$ in eq. (4) converge to each other, and the common dynamics in the limit is given by,

$\frac{\mathrm{d}\,\nabla\psi(w(t))}{\mathrm{d}t} = -\nabla\mathcal{L}(w(t)).$   (10)

Thus, as the step-sizes are made infinitesimal, the limit point of natural gradient descent is also the limit point of mirror descent, and hence it will be biased towards solutions with minimum Bregman divergence to the initialization, i.e., as $\eta_t \to 0$, $w_\infty = \arg\min_{w\in\mathcal{G}} D_\psi(w, w_0)$.

For general step-sizes $\{\eta_t\}$, if the potential is quadratic, $\psi(w) = \frac{1}{2} w^\top D w$ for some positive definite $D$, we get a linear link function $\nabla\psi(w) = Dw$ and a constant metric tensor $H(w) = D$, and the natural gradient descent updates in eq. (9) are the same as the mirror descent updates in eq. (5). Otherwise, the update in eq. (9) is only an approximation of the mirror descent update $\nabla\psi(w_{t+1}) = \nabla\psi(w_t) - \eta_t\nabla\mathcal{L}(w_t)$.

For natural gradient descent with finite step-sizes and non-quadratic potentials $\psi$, the characterization in eq. (6) generally does not hold. To see this, note that for any initialization $w_0$, a finite first step-size $\eta_0$ will lead to a $w_1$ for which the dual variable $\nabla\psi(w_1)$ is no longer in the data manifold $\nabla\psi(w_0) + \mathrm{span}(\{x_n\}_n)$, and hence $w_t$ will converge to a different global minimum that depends on the step-sizes $\{\eta_t\}$.
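To make this distinction concrete, here is a small sketch (a hypothetical one-example dataset and step-size of our own choosing) comparing a single exact mirror descent step w.r.t. the entropy potential against a single natural gradient step with the corresponding metric tensor; the two coincide only as the step-size goes to zero.

```python
# One exact mirror descent step vs. one natural gradient step for the entropy
# potential psi(w) = sum_i w_i log w_i - w_i, so grad psi(w) = log w and the
# metric tensor is H(w) = diag(1/w), i.e. H(w)^{-1} = diag(w).
import numpy as np

x, y = np.array([1.0, 2.0]), 1.0           # hypothetical single-example dataset
grad = lambda w: (w @ x - y) * x           # gradient of 0.5 * (<w, x> - y)^2

w0 = np.array([0.4, 0.3])
eta = 0.5

w_md = np.exp(np.log(w0) - eta * grad(w0))   # exact dual-space (mirror descent) step
w_ngd = w0 - eta * w0 * grad(w0)             # natural gradient step, preconditioned by diag(w0)

print(w_md, w_ngd)                           # close, but not equal for a finite eta
```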

Example 3.

Consider optimizing the squared loss on a simple dataset using natural gradient descent w.r.t. the metric tensor $H(w) = \nabla^2\psi(w)$, where $\psi$ is the entropy potential, and an initialization $w_0$ with $\nabla\psi(w_0) = 0$. Figure 1(b) shows that NGD with different step-sizes converges to different global minima. For a simple analytical example: take one finite step and then follow the continuous time path in eq. (10).

Proposition 3a.

For almost all finite first step-sizes $\eta_0 > 0$, $w_\infty \ne \arg\min_{w\in\mathcal{G}} D_\psi(w, w_0)$.

2.4 Steepest Descent

Gradient descent is also a special case of steepest descent (SD) w.r.t. a generic norm $\|\cdot\|$ (Boyd & Vandenberghe, 2004), with updates given by,

$w_{t+1} = w_t + \eta_t \Delta w_t, \quad \text{where} \quad \Delta w_t = \arg\min_{v} \; \langle\nabla\mathcal{L}(w_t), v\rangle + \tfrac{1}{2}\|v\|^2.$   (11)

The optimality of $\Delta w_t$ in eq. (11) requires $-\nabla\mathcal{L}(w_t)$ to lie in the subdifferential of $\tfrac{1}{2}\|\cdot\|^2$ at $\Delta w_t$, which is equivalent to,

$\langle\nabla\mathcal{L}(w_t), \Delta w_t\rangle = -\|\Delta w_t\|^2.$   (12)

Examples of steepest descent include gradient descent, which is steepest descent w.r.t. the $\ell_2$ norm, and coordinate descent, which is steepest descent w.r.t. the $\ell_1$ norm. In general, the update $\Delta w_t$ in eq. (11) is not uniquely defined and there could be multiple directions that minimize eq. (11). In such cases, any minimizer of eq. (11) is a valid steepest descent update and satisfies eq. (12).
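To make the different geometries concrete, the following sketch (our own illustration, not from the paper) constructs the steepest descent step for the $\ell_2$, $\ell_1$, and $\ell_\infty$ norms from a given gradient $g$, using the fact that the minimizer of eq. (11) is $-\|g\|_\star\, u$ where $u$ maximizes $\langle g, u\rangle$ over the unit ball of the chosen norm and $\|\cdot\|_\star$ is its dual norm.

```python
# Steepest descent steps w.r.t. different norms for a fixed gradient g.
import numpy as np

def steepest_step(g, norm="l2"):
    """Return the steepest descent step Delta_w for gradient g."""
    if norm == "l2":                       # gradient descent: step = -g
        return -g
    if norm == "l1":                       # coordinate descent along a maximal coordinate
        j = np.argmax(np.abs(g))           # any maximal coordinate is a valid choice
        step = np.zeros_like(g)
        step[j] = -g[j]                    # equals -||g||_inf * sign(g_j) on coordinate j
        return step
    if norm == "linf":                     # sign gradient descent scaled by ||g||_1
        return -np.sum(np.abs(g)) * np.sign(g)
    raise ValueError(norm)

g = np.array([3.0, -1.0, 0.5])
for norm in ("l2", "l1", "linf"):
    print(norm, steepest_step(g, norm))
```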

Generalizing gradient descent, we might expect the limit point of steepest descent w.r.t. an arbitrary norm $\|\cdot\|$ to be the solution closest to the initialization in the corresponding norm, $\arg\min_{w\in\mathcal{G}}\|w - w_0\|$. This is indeed the case for quadratic norms $\|v\|_D = \sqrt{v^\top D v}$, when eq. (11) is equivalent to mirror descent with $\psi(w) = \frac{1}{2}\|w\|_D^2$. Unfortunately, this does not hold for general norms.

Example 4.

Consider minimizing the squared loss on a simple dataset using steepest descent updates w.r.t. the $\ell_{4/3}$ norm. The empirical results for this problem in Figure 1(c) clearly show that even for $\ell_p$ norms where the squared norm $\frac{1}{2}\|\cdot\|_p^2$ is smooth and strongly convex, the corresponding steepest descent converges to a global minimum that depends on the step-size. Further, even in the continuous step-size limit of $\eta_t \to 0$, $w_\infty$ does not converge to the minimum norm solution $\arg\min_{w\in\mathcal{G}}\|w - w_0\|$.

Coordinate descent

Steepest descent w.r.t. the $\ell_1$ norm is called coordinate descent, with updates:

$w_{t+1} \in w_t - \eta_t\, \mathrm{conv}\Big\{ \big\|\nabla\mathcal{L}(w_t)\big\|_\infty\, \mathrm{sign}\big(\nabla\mathcal{L}(w_t)[j_t]\big)\, e_{j_t} \;:\; j_t \in \arg\max_j \big|\nabla\mathcal{L}(w_t)[j]\big| \Big\},$

where $\mathrm{conv}(S)$ denotes the convex hull of the set $S$, and $\{e_j\}_j$ are the standard basis vectors. That is, when multiple partial derivatives are maximal in absolute value, we can choose any convex combination of the maximizing coordinates, leading to many possible coordinate descent optimization paths.

The connection between the optimization path of coordinate descent and the $\ell_1$ regularization path, given by $\hat{w}(\lambda) = \arg\min_w \mathcal{L}(w) + \lambda\|w\|_1$, has been studied by Efron et al. (2004). The specific coordinate descent path where updates are along the average of all optimal coordinates and the step-sizes are infinitesimal is equivalent to forward stage-wise selection, a.k.a. $\epsilon$-boosting (Friedman, 2001); a simplified variant is sketched below. When the regularization path is monotone in each of the coordinates, it is identical to this stage-wise selection path, i.e., to a coordinate descent optimization path (and also to the related LARS path) (Efron et al., 2004). In this case, in the limits of $\lambda \to 0$ and $\eta_t \to 0$, the optimization and regularization paths both converge to the minimum $\ell_1$ norm solution. However, when the regularization path is not monotone, which can and does happen, the optimization and regularization paths diverge, and forward stage-wise selection can converge to solutions with sub-optimal $\ell_1$ norm. This matches our understanding that steepest descent w.r.t. a general norm, in this case the $\ell_1$ norm, might converge to a solution that is not the minimum norm solution.
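Below is a compact sketch of $\epsilon$-boosting on a toy regression problem (data, step-size $\epsilon$, and iteration count are our own hypothetical choices; ties among maximal coordinates are broken by taking the first one, which is one valid coordinate descent path).

```python
# Forward stage-wise selection / eps-boosting: tiny steps along a coordinate
# with maximal absolute partial derivative of the squared loss.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 50))
y = X[:, :3] @ np.array([2.0, -1.0, 0.5])      # sparse ground truth

w = np.zeros(50)
eps = 1e-3
for _ in range(100_000):
    g = X.T @ (X @ w - y)
    j = np.argmax(np.abs(g))                   # a maximal coordinate
    w[j] -= eps * np.sign(g[j])                # infinitesimal-style step

# The residual shrinks and the l1 norm of the iterate approaches that of a
# sparse interpolator; the exact values depend on the data and on eps.
print(np.linalg.norm(X @ w - y), np.abs(w).sum())
```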

2.5 Summary for losses with a unique finite root

For losses with a unique finite root, we characterized the implicit bias of the generic mirror descent algorithm in terms of the potential function and the initialization. This characterization extends to momentum in the dual space, as well as to natural gradient descent in the limit of infinitesimal step-size. We also saw that the characterization breaks down for mirror descent with primal momentum and for natural gradient descent with finite step-sizes. Moreover, for steepest descent with general norms, we were unable to get a useful characterization even in the infinitesimal step-size limit. In the following section, we will see that for strictly monotone losses, we can get a characterization also for steepest descent.

3 Strictly Monotone Losses

We now turn to strictly monotone loss functions, where the behavior of the implicit bias is fundamentally different, as are the situations in which the implicit bias can be characterized. Such losses are common in classification problems where $y \in \{-1, +1\}$ and $\ell(\hat{y}, y)$ is typically a continuous surrogate of the 0-1 loss. Examples of such losses include the logistic loss, exponential loss, and probit loss.

Property 2 (Strict monotone losses).

$\ell(\hat{y}, y)$ is bounded from below, and, for every $y$, $\ell(\hat{y}, y)$ is strictly monotonically decreasing in the margin $\hat{y} y$. Without loss of generality, the lower bound is zero, i.e., $\inf_{\hat{y}}\ell(\hat{y}, y) = 0$, and $\ell(\hat{y}, y) \to 0$ as $\hat{y} y \to \infty$.

We look at classification models that fit the training data with linear decision boundaries, with decision rule given by $\hat{y}(x) = \mathrm{sign}(\langle w, x\rangle)$. In many instances of the proofs, we also assume without loss of generality that $y_n = 1$ for all $n$, since for linear models the sign of $y_n$ can equivalently be absorbed into $x_n$.

We again look at the unregularized empirical risk minimization objective of the form in eq. (1), but now with strictly monotone losses. When the training data is not linearly separable, the empirical objective $\mathcal{L}(w)$ can have a finite global minimum. However, if the dataset is linearly separable, i.e., $\exists w : \forall n,\ \langle w, x_n\rangle > 0$, the empirical loss is again ill-posed, and moreover $\mathcal{L}(w)$ does not have any finite minimizer, i.e., $\mathcal{L}(w) \to 0$ only as $\|w\| \to \infty$. Thus, for any sequence $\{w_t\}$, if $\mathcal{L}(w_t) \to 0$, then $w_t$ necessarily diverges to infinity rather than converging, and hence we cannot talk about $\lim_{t\to\infty} w_t$. Instead, we look at the limit direction

$\bar{w}_\infty = \lim_{t\to\infty} \frac{w_t}{\|w_t\|},$

whenever the limit exists. We refer to the existence of this limit as convergence in direction. Note that the limit direction fully specifies the decision rule of the classifier that we care about.

We focus on the exponential loss $\ell(\hat{y}, y) = \exp(-\hat{y} y)$. However, our results can be extended to loss functions with tight exponential tails, including the logistic and sigmoid losses, along the lines of Soudry et al. (2017) and Telgarsky (2013).

3.1 Gradient descent

Soudry et al. (2017) showed that for almost all linearly separable datasets, gradient descent with any initialization and any bounded step-size converges in direction to the maximum margin separator with unit $\ell_2$ norm, i.e., the hard margin support vector machine classifier,

$\bar{w}_\infty = \hat{w}_{\ell_2} := \arg\max_{\|w\|_2 \le 1}\; \min_n \langle w, x_n\rangle.$

This characterization of the implicit bias is independent of both the step-size and the initialization. We already see a fundamental difference from the implicit bias of gradient descent for losses with a unique finite root (Section 2.1), where the characterization depended on the initialization.
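The following toy sketch (a hypothetical three-example dataset with all labels taken as +1, not from the paper) illustrates this behavior numerically: the gradient descent iterate grows without bound, but its direction slowly (logarithmically) approaches the hard-margin SVM direction, computed here with a generic constrained solver.

```python
# Gradient descent on the exponential loss for separable data: the direction
# of the (diverging) iterate approaches the L2 max-margin direction.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])   # toy separable data, labels all +1

def loss_grad(w):
    return -X.T @ np.exp(-(X @ w))                    # gradient of sum_n exp(-<w, x_n>)

w = np.zeros(2)
for _ in range(500_000):
    w -= 0.1 * loss_grad(w)

# Reference: hard-margin SVM through the origin, min ||w||^2 s.t. <w, x_n> >= 1.
svm = minimize(lambda v: v @ v, x0=np.ones(2), method="SLSQP",
               constraints={'type': 'ineq', 'fun': lambda v: X @ v - 1.0})
w_svm = svm.x / np.linalg.norm(svm.x)

print(w / np.linalg.norm(w))    # close to the max-margin direction below
print(w_svm)
```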

Can we similarly characterize the implicit bias of different algorithms, establishing convergence in direction and calculating $\bar{w}_\infty$? Can we do this even when we could not characterize the limit point $w_\infty$ for losses with unique finite roots? As we will see in the following section, we can indeed answer these questions for steepest descent w.r.t. arbitrary norms.

3.2 Steepest Descent

Recall that for the squared loss, the limit point of steepest descent depends on the step-size, and we were unable to obtain a useful characterization even for infinitesimal step-size and zero initialization. In contrast, for the exponential loss, the following theorem provides a crisp characterization of the limit direction of steepest descent as a maximum margin solution, independent of step-size (as long as it is small enough) and initialization. Let $\|\cdot\|_\star$ denote the dual norm of $\|\cdot\|$.

Theorem 5.

For any separable dataset $\{(x_n, y_n)\}_{n=1}^N$ and any norm $\|\cdot\|$, consider the steepest descent updates from eq. (11) for minimizing $\mathcal{L}(w)$ in eq. (1) with the exponential loss $\ell(\hat{y}, y) = \exp(-\hat{y} y)$. For all initializations $w_0$, and all bounded step-sizes satisfying $\eta_t \le \min\big\{\eta_+, \frac{1}{B^2 \mathcal{L}(w_t)}\big\}$, where $B := \max_n \|x_n\|_\star$ and $\eta_+ < \infty$ is any finite upper bound, the iterates satisfy the following,

$\lim_{t\to\infty}\; \min_n \frac{\langle w_t, x_n\rangle}{\|w_t\|} \;=\; \max_{\|w\|\le 1}\; \min_n \langle w, x_n\rangle.$

In particular, if there is a unique maximum-$\|\cdot\|$ margin solution $w^*_{\|\cdot\|} = \arg\max_{\|w\|\le 1}\min_n \langle w, x_n\rangle$, then the limit direction is given by $\bar{w}_\infty = \lim_{t\to\infty}\frac{w_t}{\|w_t\|} = w^*_{\|\cdot\|}$.

A special case of Theorem 5 is steepest descent w.r.t. the $\ell_1$ norm, which as we already saw corresponds to coordinate descent. More specifically, coordinate descent on the exponential loss can be thought of as an alternative presentation of AdaBoost (Schapire & Freund, 2012), where each coordinate represents the output of one "weak learner". Indeed, the initially mysterious generalization properties of boosting have been understood in terms of implicit regularization (Schapire & Freund, 2012), and later on AdaBoost with small enough step-size was shown to converge in direction precisely to the maximum $\ell_1$ margin solution (Zhang et al., 2005; Shalev-Shwartz & Singer, 2010; Telgarsky, 2013), just as guaranteed by Theorem 5. In fact, Telgarsky (2013) generalized the result to a richer variety of exponentially tailed loss functions including the logistic loss, and a broad class of non-constant step-size rules. Interestingly, coordinate descent with exact line search can result in infinite step-sizes, leading the iterates to converge in a different direction that is not a maximum $\ell_1$ margin direction (Rudin et al., 2004); hence the maximum step-size bound in Theorem 5.

Theorem 5 is a generalization of the result of Telgarsky (2013) to steepest descent with respect to other norms, and our proof follows the same strategy as Telgarsky. We first prove a generalization of the duality result of Shalev-Shwartz & Singer (2010): if there is a unit norm linear separator that achieves margin $\gamma$, then $\|\nabla\mathcal{L}(w)\|_\star \ge \gamma\,\mathcal{L}(w)$ for every $w$. By using this lower bound on the dual norm of the gradient, we are able to show that the loss decreases faster than the increase in the norm of the iterates, establishing convergence in a margin maximizing direction.

In relating the optimization path to the regularization path, it is also relevant to relate Theorem 5 to the result of Rosset et al. (2004) that for monotone loss functions and $\ell_p$ norms, the regularization path $\hat{w}(c) = \arg\min_{\|w\|\le c}\mathcal{L}(w)$ also converges in direction to the maximum margin separator, i.e., $\lim_{c\to\infty}\frac{\hat{w}(c)}{\|\hat{w}(c)\|} = w^*_{\|\cdot\|}$. Although the optimization path and the regularization path are not the same, they both converge to the same maximum margin separator in the limits of $c \to \infty$ and $t \to \infty$, for the regularization path and steepest descent optimization path, respectively.

3.3 Adaptive Gradient Descent (AdaGrad)

Adaptive gradient methods, such as AdaGrad (Duchi et al., 2011) or Adam (Kingma & Ba, 2015), are very popular for neural network training. We now look at the implicit bias of the basic (diagonal) AdaGrad updates,

$w_{t+1} = w_t - \eta\, G_t^{-1/2}\, \nabla\mathcal{L}(w_t),$   (13)

where $G_t$ is a diagonal matrix such that,

$G_t[i, i] = G_{t-1}[i, i] + \big(\nabla\mathcal{L}(w_t)[i]\big)^2.$   (14)
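For concreteness, here is a minimal sketch of these diagonal AdaGrad updates on a toy separable problem with the exponential loss (the data, step-size, and initialization are our own hypothetical choices).

```python
# Diagonal AdaGrad on the exponential loss: the preconditioner accumulates
# squared gradient coordinates, so early updates keep influencing later ones.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 6))            # toy separable data, labels all +1

def loss_grad(w):
    return -X.T @ np.exp(-(X @ w))         # gradient of sum_n exp(-<w, x_n>)

w = np.zeros(6)
G = np.zeros(6)                            # diagonal of G_t, here with G_{-1} = 0
eta = 0.1
for _ in range(10_000):
    g = loss_grad(w)
    G += g ** 2                            # accumulate squared gradients (cf. eq. (14))
    w -= eta * g / np.sqrt(G + 1e-12)      # per-coordinate rescaled step (cf. eq. (13))

print(w / np.linalg.norm(w))               # the direction depends on eta, w_0, G_0 (cf. Theorem 6)
```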

The AdaGrad updates described above correspond to pre-conditioned gradient descent, where the pre-conditioning matrix $G_t$ adapts across iterations. It was observed by Wilson et al. (2017) that for neural networks with squared loss, adaptive methods tend to degrade generalization performance in comparison to non-adaptive methods (e.g., SGD with momentum), even when both methods are used to train the network until convergence to a global minimum of the training loss. This suggests that adaptivity does indeed affect the implicit bias. For the squared loss, by inspection of the updates in eq. (13), we do not expect to get a characterization of the limit point that is independent of the step-sizes.

However, we might hope that, as for steepest descent, the situation is different for strictly monotone losses, where the asymptotic behavior could potentially nullify the initial conditions. Examining the updates in eq. (13), we can see that the robustness to the initialization and initial updates depends on whether the matrices $G_t$ diverge or converge: if $G_t$ diverges, then we expect the asymptotic effects to dominate, but if it remains bounded, then the limit direction will depend on the initial conditions.

Unfortunately, the following theorem shows that the entries of the matrix $G_t$ remain bounded, and hence, even for strictly monotone losses, the initial conditions and step-size have a non-vanishing contribution to the asymptotic behavior of $w_t$, and hence to the limit direction $\bar{w}_\infty$, whenever it exists. In other words, the implicit bias of AdaGrad does indeed depend on the initialization and step-size.

Theorem 6.

For any linearly separable training dataset $\{(x_n, y_n)\}_{n=1}^N$, consider the AdaGrad iterates $w_t$ from eq. (13) for minimizing $\mathcal{L}(w)$ with the exponential loss $\ell(\hat{y}, y) = \exp(-\hat{y} y)$. For any fixed and bounded step-size $\eta < \infty$, and any initialization of $w_0$ and $G_0 \succ 0$ satisfying mild conditions on the first update, the entries of the pre-conditioner remain bounded: for every coordinate $i$, $\lim_{t\to\infty} G_t[i, i] < \infty$.

4 Gradient descent on the factorized parameterization

Consider the empirical risk minimization in eq. (1) for matrix valued parameters $W \in \mathbb{R}^{d\times d}$,

$\min_{W}\; \mathcal{L}(W) = \sum_{n=1}^N \ell\big(\langle W, X_n\rangle, y_n\big).$   (15)

This is the exact same setting as eq. (1), obtained by arranging $w$ and $x_n$ as matrices $W$ and $X_n$. We can now study another class of algorithms for learning linear models based on matrix factorization, where we reparameterize $W$ as $W = UV^\top$ with unconstrained $U \in \mathbb{R}^{d\times d}$ and $V \in \mathbb{R}^{d\times d}$ to get the following equivalent objective,

$\min_{U, V}\; \mathcal{L}(UV^\top) = \sum_{n=1}^N \ell\big(\langle UV^\top, X_n\rangle, y_n\big).$   (16)

Note that although non-convex in $(U, V)$, eq. (16) is equivalent to eq. (15) with the exact same set of global minima over $W = UV^\top$. Gunasekar et al. (2017) studied this problem for the squared loss and noted that gradient descent on the factorization yields a radically different implicit bias compared to gradient descent directly on $W$. In particular, gradient descent on $(U, V)$ is often observed to be biased towards low nuclear norm solutions, which in turn ensures generalization (Srebro et al., 2005) and low rank matrix recovery (Recht et al., 2010; Candes & Recht, 2009). Since the matrix factorization objective in eq. (16) can be viewed as a two-layer neural network with linear activations, understanding the implicit bias here could provide direct insights into characterizing the implicit bias in more complex neural networks with non-linear activations.

Gunasekar et al. (2017) noted that the optimization problem in eq. (16) over the factorization $(U, V)$ can be cast as a special case of optimization over p.s.d. matrices with an unconstrained symmetric factorization $W = UU^\top$:

$\min_{U \in \mathbb{R}^{d\times d}}\; \mathcal{L}(UU^\top) = \sum_{n=1}^N \ell\big(\langle UU^\top, X_n\rangle, y_n\big).$   (17)

Specifically, in terms of both the objective as well as the gradient descent updates, a problem instance of eq. (16) is equivalent to a problem instance of eq. (17) with larger data matrices and the loss optimized over a larger p.s.d. matrix that contains, as an off-diagonal block, the optimization variable $W$ of the original problem instance of eq. (16), while the diagonal blocks are p.s.d. matrices that are irrelevant for the objective.

Henceforth, we will therefore consider the symmetric matrix factorization in eq. (17). Let $U_0$ be any full rank initialization; the gradient descent updates in $U$ are given by,

$U_{t+1} = U_t - \eta_t\, \nabla\mathcal{L}(U_t U_t^\top)\, U_t,$   (18)

with the corresponding updates in $W_t = U_t U_t^\top$ given by,

$W_{t+1} = W_t - \eta_t\big(\nabla\mathcal{L}(W_t)\, W_t + W_t\, \nabla\mathcal{L}(W_t)\big) + \eta_t^2\, \nabla\mathcal{L}(W_t)\, W_t\, \nabla\mathcal{L}(W_t).$   (19)

Losses with a unique finite root

For the squared loss, Gunasekar et al. (2017) showed that the implicit bias of the iterates $W_t$ in eq. (19) crucially depends on both the initialization $U_0$ as well as the step-sizes $\{\eta_t\}$. Gunasekar et al. (2017) conjectured, and provided theoretical and empirical evidence, that gradient descent on the factorization converges to the minimum nuclear norm global minimum, but only if the initialization is infinitesimally close to zero and the step-sizes are infinitesimally small. Li et al. (2017) later proved the conjecture under the additional assumption that the measurements $\{X_n\}_n$ satisfy a restricted isometry property (RIP).
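As an illustration of this bias (a toy sketch with our own randomly generated rank-1 sensing problem, not an experiment from the paper), the following compares gradient descent on the symmetric factorization with a small initialization against gradient descent directly on $W$ from zero, which returns the minimum Frobenius norm fit.

```python
# Gradient descent on U U^T (small initialization) vs. gradient descent on W
# for an underdetermined matrix sensing problem with a rank-1 p.s.d. target.
import numpy as np

rng = np.random.default_rng(7)
d, N = 6, 15
a = rng.standard_normal(d)
W_star = np.outer(a, a) / (a @ a)                   # rank-1 p.s.d. target, nuclear norm 1
Xs = rng.standard_normal((N, d, d))
Xs = (Xs + Xs.transpose(0, 2, 1)) / 2               # symmetric measurement matrices
y = np.einsum('nij,ij->n', Xs, W_star)

def grad_W(W):                                      # gradient of the (averaged) squared loss
    r = np.einsum('nij,ij->n', Xs, W) - y
    return np.einsum('n,nij->ij', r, Xs) / N

# (a) gradient descent on the factorization, initialized near zero
U = 1e-3 * rng.standard_normal((d, d))
for _ in range(20_000):
    G = grad_W(U @ U.T)
    U -= 0.05 * (G + G.T) @ U

# (b) gradient descent directly on W, initialized at zero (min Frobenius norm fit)
W = np.zeros((d, d))
for _ in range(20_000):
    W -= 0.05 * grad_W(W)

nuc = lambda M: np.linalg.norm(M, 'nuc')
print(nuc(U @ U.T), nuc(W), nuc(W_star))   # the factorized run typically has the smaller nuclear norm
```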

In the case of the squared loss, it is thus evident that for finite step-sizes and finite initialization, the implicit bias towards the minimum nuclear norm global minimum is not exact. In practice, not only do we need $\eta_t \to 0$, but we also cannot initialize very close to zero, since zero is a saddle point for eq. (17). The natural question motivated by the results in Section 3 is: for strictly monotone losses, can we get a characterization of the implicit bias of gradient descent for the factorized objective in eq. (17) that is more robust to the initialization and step-size?

Strict monotone losses

In the following theorem, we again see that the characterization of the implicit bias of gradient descent for the factorized objective is more robust in the case of strictly monotone losses.

Theorem 7.

For almost all datasets separable by a p.s.d. linear classifier, consider the gradient descent iterates $U_t$ in eq. (18) for minimizing $\mathcal{L}(UU^\top)$ with the exponential loss, and the corresponding sequence of linear predictors $W_t = U_t U_t^\top$ in eq. (19). For any full rank initialization $U_0$ and any finite step-size sequence $\{\eta_t\}$, if $U_t$ asymptotically minimizes $\mathcal{L}$, i.e., $\mathcal{L}(U_t U_t^\top) \to 0$, and additionally the updates $U_{t+1} - U_t$ and the gradients $\nabla\mathcal{L}(W_t)$ converge in direction, then the limit direction $\bar{U}_\infty = \lim_{t\to\infty}\frac{U_t}{\|U_t\|_F}$ is a scaling of a first order stationary point (f.o.s.p.) of the following non-convex optimization problem,

$\min_{U \in \mathbb{R}^{d\times d}}\; \|U\|_F^2 \quad \text{s.t.} \quad \forall n,\ \langle UU^\top, X_n\rangle \ge 1.$   (20)
Remark 2.

Any global minimum $U^*$ of eq. (20) corresponds to a predictor $W^* = U^* {U^*}^\top$ that minimizes the nuclear norm of a linear p.s.d. classifier subject to margin constraints,

$\min_{W \succeq 0}\; \|W\|_* \quad \text{s.t.} \quad \forall n,\ \langle W, X_n\rangle \ge 1.$   (21)

Additionally, in the absence of rank constraints on $U$, all second order stationary points of eq. (20) are global minima of the problem. More generally, we expect the stronger result that $\bar{W}_\infty = \bar{U}_\infty \bar{U}_\infty^\top$, which is also the limit direction of $W_t$, is a minimizer of eq. (21). Showing that $W_t$ indeed converges in direction to such a minimizer is of interest for future work.

Here we note that convergence of $W_t$ in direction is necessary for the characterization of the implicit bias to be relevant, but in Theorem 7 we require the stronger condition that the gradients $\nabla\mathcal{L}(W_t)$ also converge in direction. Relaxing this condition is of interest for future work.

Key property

Let us look at the exponential loss when $W_t$ converges in direction to, say, $\bar{W}_\infty$. Then $W_t$ can be expressed as $W_t = g(t)\,\bar{W}_\infty + \rho_t$ for some scalar $g(t) \to \infty$ and a residual $\rho_t$ that grows slower than $g(t)$. Consequently, the gradients $-\nabla\mathcal{L}(W_t) = \sum_n \exp(-\langle W_t, X_n\rangle)\, X_n$ will asymptotically be dominated by linear combinations of the examples $X_n$ that have the smallest distance to the decision boundary, i.e., the support vectors of $\bar{W}_\infty$. This behavior can be used to show optimality of $\bar{U}_\infty$, with $\bar{U}_\infty \bar{U}_\infty^\top \propto \bar{W}_\infty$, to the first order stationary points of the maximum margin problem in eq. (20).

This idea is formalized in the following lemma, which is of interest beyond the results in this paper.

Lemma 8.

For almost all linearly separable datasets $\{(x_n, y_n)\}_{n=1}^N$, consider any sequence $\{w_t\}$ that minimizes $\mathcal{L}(w)$ in eq. (1) with the exponential loss, i.e., $\mathcal{L}(w_t) \to 0$. If $\frac{w_t}{\|w_t\|}$ converges to $\bar{w}$, then every accumulation point $z$ of the normalized negative gradients $\big\{\frac{-\nabla\mathcal{L}(w_t)}{\|\nabla\mathcal{L}(w_t)\|}\big\}_t$ satisfies $z = \sum_{n \in S} \alpha_n x_n$ for some $\alpha_n \ge 0$, where $S = \arg\min_n \langle \bar{w}, x_n\rangle$ are the indices of the data points with smallest margin to $\bar{w}$.

5 Summary

We studied the implicit bias of different optimization algorithms for two families of losses, losses with a unique finite root and strictly monotone losses, where the biases are fundamentally different. In the case of losses with a unique finite root, we have a simple characterization of the limit point $w_\infty$ for mirror descent. But for this family of losses, such a succinct characterization does not extend to steepest descent with respect to general norms. On the other hand, for strictly monotone losses, we noticed that the initial updates of the algorithm, including the initialization and the initial step-sizes, are nullified when we analyze the asymptotic limit direction $\bar{w}_\infty$. We showed that for steepest descent, the limit direction is a maximum margin separator within the unit ball of the corresponding norm. We also looked at other optimization algorithms for strictly monotone losses. For matrix factorization, we again get a more robust characterization that relates the limit direction to the maximum margin separator with unit nuclear norm. This, in contrast to the squared loss case (Gunasekar et al., 2017), is independent of the initialization and step-size. However, for AdaGrad, we showed that even for strictly monotone losses, the limit direction can depend on the initial conditions.

In our results, we characterize the implicit bias for linear models as minimum norm (or potential) or maximum margin solutions. These are indeed very special among all the solutions that fit the training data, and in particular, their generalization performance can in turn be understood from standard analyses (Bartlett & Mendelson, 2003).

Going forward, for more complicated non-linear models, especially neural networks, further work is required in order to get a more complete understanding of the implicit bias. The preliminary result for matrix factorization provides us with tools to attempt extensions to multi-layer linear models, and eventually to non-linear networks. Even for linear models, the question of what the implicit bias is when $\mathcal{L}(w)$ is optimized with explicit constraints $w \in \mathcal{W}$ is an open problem. We believe similar characterizations can be obtained when there are multiple feasible solutions with $\mathcal{L}(w) = 0$. We also believe that the results for single-output models considered in this paper can be extended to multi-output loss functions.

Finally, we would like a more fine grained analysis connecting the iterates $w_t$ along the optimization path of various algorithms to the regularization path, $\hat{w}_\lambda = \arg\min_w \mathcal{L}(w) + \lambda R(w)$, where an explicit regularizer $R(w)$ (e.g., a norm or potential) is added to the optimization objective. In particular, our positive characterizations show that the optimization and regularization paths meet in the limits of $t \to \infty$ and $\lambda \to 0$, respectively. It would be desirable to further understand the relations between the entire optimization and regularization paths, which will help us understand the non-asymptotic effects of early stopping.

Acknowledgments

The authors are grateful to M.S. Nacson, Y. Carmon, and the anonymous ICML reviewers for helpful comments on the manuscript. The research was supported in part by NSF IIS award 1302662. The work of DS was supported by the Taub Foundation.

References

  • Amari (1998) Amari, S. I. Natural gradient works efficiently in learning. Neural computation, 1998.
  • Bartlett & Mendelson (2003) Bartlett, P. L. and Mendelson, S. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 2003.
  • Beck & Teboulle (2003) Beck, A. and Teboulle, M. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 2003.
  • Boyd & Vandenberghe (2004) Boyd, S. and Vandenberghe, L. Convex optimization. Cambridge university press, 2004.
  • Bregman (1967) Bregman, L. M. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR computational mathematics and mathematical physics, 1967.
  • Candes & Recht (2009) Candes, E. J. and Recht, B. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 2009.
  • Duchi et al. (2011) Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011.
  • Efron et al. (2004) Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. Least angle regression. The Annals of statistics, 2004.
  • Friedman (2001) Friedman, J. H. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 2001.
  • Gunasekar et al. (2017) Gunasekar, S., Woodworth, B. E., Bhojanapalli, S., Neyshabur, B., and Srebro, N. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems, pp. 6152–6160, 2017.
  • Hoffer et al. (2017) Hoffer, E., Hubara, I., and Soudry, D. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In NIPS, pp. 1–13, may 2017. URL http://arxiv.org/abs/1705.08741.
  • Keskar et al. (2016) Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2016.
  • Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • Kivinen & Warmuth (1997) Kivinen, J. and Warmuth, M. K. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 1997.
  • Li et al. (2017) Li, Y., Ma, T., and Zhang, H. Algorithmic regularization in over-parameterized matrix recovery. arXiv preprint arXiv:1712.09203, 2017.
  • Muresan & Muresan (2009) Muresan, M. and Muresan, M. A concrete approach to classical analysis, volume 14. Springer, 2009.
  • Nemirovskii & Yudin (1983) Nemirovskii, A. and Yudin, D. Problem complexity and method efficiency in optimization. Wiley, 1983.
  • Nesterov (1983) Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, 1983.
  • Neyshabur et al. (2015a) Neyshabur, B., Salakhutdinov, R. R., and Srebro, N. Path-sgd: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2422–2430, 2015a.
  • Neyshabur et al. (2015b) Neyshabur, B., Tomioka, R., and Srebro, N. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations, 2015b.
  • Neyshabur et al. (2017) Neyshabur, B., Tomioka, R., Salakhutdinov, R., and Srebro, N. Geometry of optimization and implicit regularization in deep learning. arXiv preprint arXiv:1705.03071, 2017.
  • Polyak (1964) Polyak, B. T. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964.
  • Recht et al. (2010) Recht, B., Fazel, M., and Parrilo, P. A. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM review, 2010.
  • Ross (1980) Ross, K. A. Elementary analysis. Springer, 1980.
  • Rosset et al. (2004) Rosset, S., Zhu, J., and Hastie, T. Boosting as a regularized path to a maximum margin classifier. Journal of Machine Learning Research, 2004.
  • Rudin et al. (2004) Rudin, C., Daubechies, I., and Schapire, R. E. The dynamics of adaboost: Cyclic behavior and convergence of margins. Journal of Machine Learning Research, 5(Dec):1557–1595, 2004.
  • Schapire & Freund (2012) Schapire, R. E. and Freund, Y. Boosting: Foundations and algorithms. MIT press, 2012.
  • Shalev-Shwartz & Singer (2010) Shalev-Shwartz, S. and Singer, Y. On the equivalence of weak learnability and linear separability: New relaxations and efficient boosting algorithms. Machine learning, 80(2-3):141–163, 2010.
  • Smith et al. (2018) Smith, S. L., Kindermans, P.-J., Ying, C., and Le, Q. V. Don't decay the learning rate, increase the batch size. In International Conference on Learning Representations, 2018.
  • Soudry et al. (2017) Soudry, D., Hoffer, E., and Srebro, N. The implicit bias of gradient descent on separable data. arXiv preprint arXiv:1710.10345, 2017.
  • Srebro et al. (2005) Srebro, N., Alon, N., and Jaakkola, T. S. Generalization error bounds for collaborative prediction with low-rank matrices. In Advances In Neural Information Processing Systems, pp. 1321–1328, 2005.
  • Telgarsky (2013) Telgarsky, M. Margins, shrinkage and boosting. In Proceedings of the 30th International Conference on International Conference on Machine Learning-Volume 28, pp. II–307. JMLR. org, 2013.
  • Wilson et al. (2017) Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems, pp. 4151–4161, 2017.
  • Zhang et al. (2017) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017.
  • Zhang et al. (2005) Zhang, T., Yu, B., et al. Boosting with early stopping: Convergence and consistency. The Annals of Statistics, 33(4):1538–1579, 2005.

Appendix A Losses with a unique finite root

Let $\ell'(\hat{y}, y)$ denote the derivative of $\ell(\hat{y}, y)$ w.r.t. its first operand $\hat{y}$. Then we can see that, for any $w$,

$\nabla\mathcal{L}(w) = \sum_{n=1}^N \ell'\big(\langle w, x_n\rangle, y_n\big)\, x_n \;\in\; \mathrm{span}\big(\{x_n\}_n\big).$   (22)

A.1 Proof of Theorems 1-1b

For a strongly convex potential , denote the global optimum with minimum Bregman divergence to the initialization as