Deep neural networks have recently popularized the notion of “deep learning from data,” with field-changing performance improvements reported in numerous machine learning and artificial intelligence tasks [21, 13]. Despite their widespread use and numerous recent contributions, our understanding of how and why neural networks (NNs) achieve this success remains limited. While their expressivity (expressive power) has been well studied [37, 38], the research focus has shifted toward addressing the computational challenges of training such models and understanding their generalization behavior.
From the vantage point of optimization, training deep NNs requires dealing with extremely high-dimensional and non-convex problems, which are NP-hard in the worst case. It has been shown that even training a two-layer NN with three nodes is NP-complete [2]. It is therefore not clear whether and how one can provably yet efficiently train an NN to global optimality.
Nevertheless, as often evidenced by empirical tests, these NN architectures can be ‘successfully’ trained by means of simple local search heuristics, such as ‘plain-vanilla’ (stochastic) gradient descent ((S)GD) on real or randomly generated data. In the over-parameterized setting in particular, where NNs have far more parameters than training samples, SGD can often successfully train these networks while exhibiting favorable generalization performance without overfitting. As an example, the celebrated VGG19 net with 20 million parameters, trained on the CIFAR-10 dataset of 50 thousand data samples, achieves state-of-the-art classification accuracy and also generalizes well to other datasets. In addition, training NNs by, e.g., adding noise to the training samples, or to the (stochastic) gradients during back-propagation, has well-documented merits in enhancing generalization performance, as well as in avoiding bad local minima. In this contribution, we take a further step toward understanding the analytical performance of NNs, by providing fundamental insights into the optimization landscape and generalization capability of NNs trained by means of SGD with properly injected noise.
For concreteness, we address these challenges in a binary classification setting, where the goal is to train a two-layer ReLU network on linearly separable data [39]. The fundamental question we target here is whether and how one can efficiently train a ReLU network to global optimality, despite the presence of infinitely many local minima, maxima, and saddle points. Separable data have also been used in recent works [45, 23, 29, 26, 5, 48]. The motivation behind employing separable data is twofold. First, they afford a zero training loss, making it possible to tell whether an NN has been successfully trained or not (as most loss functions for training NNs are non-convex, it is in general difficult to certify global optimality). Second, separable data enable improving plain-vanilla SGD by leveraging the power of random noise in a principled manner, so that the modified SGD algorithm can provably escape local minima and saddle points efficiently, and converge to a global minimum within a finite number of non-zero updates. We further investigate the generalization capability of successfully trained ReLU networks leveraging compression bounds. Thus, the binary classification setting offers a favorable testbed for studying the effect of training noise on avoiding overfitting when learning ReLU networks. Although the focus of this paper is on two-layer networks, our novel algorithm and theoretical results can shed light on developing reliable training algorithms for, as well as on understanding the generalization of, deep networks.
In a nutshell, the main contributions of the present work are:
A simple SGD algorithm that can provably escape local minima and saddle points to efficiently train any two-layer ReLU network to attain global optimality;
Theoretical and empirical evidence supporting the injection of noise during training NNs to escape bad local minima and saddle points; and
Tight generalization error bounds and guarantees for (possibly over-parameterized) ReLU networks optimally trained with the novel SGD algorithm.
The remainder of this paper is structured as follows. Section 2 reviews related contributions. Section 3 introduces the binary classification setting and the problem formulation. Section 4 presents the novel SGD algorithm and establishes its theoretical performance. Section 5 deals with the generalization behavior of ReLU networks trained with the novel SGD algorithm. Numerical tests on synthetic data and real images are provided in Section 6. The paper is concluded with a research outlook in Section 7, while technical proofs of the main results are delegated to the Appendix.
Lower- (upper-)case boldface letters denote vectors (matrices). Calligraphic letters are reserved for sets, with the exception of the symbol reserved for the underlying probability distribution. The floor operation returns the largest integer no greater than its argument, the cardinality of a set counts its number of elements, and the norm of a vector is the Euclidean norm.
2 Related Work
As mentioned earlier, NN models have lately enjoyed great empirical success in numerous domains [21, 13, 53]. Many contributions have been devoted to explaining this success; see e.g., [4, 5, 18, 12, 23, 56, 35, 24, 26, 44, 19, 48, 52, 51, 10, 25, 40, 49, 17]. Recent research efforts have focused on the expressive ability of deep NNs, and on the computational tractability of training such models [43, 5]. In fact, training NNs is NP-hard in general, even for small and shallow networks [14, 2]. Under various assumptions (e.g., Gaussian data and a sufficiently large number of hidden units) as well as different models, however, it has been shown that local search heuristics such as (S)GD can efficiently learn two-layer NNs with quadratic or ReLU activations.
Another line of research has studied the landscape properties of various loss functions for learning NNs; see e.g., [18, 56, 23, 5, 55, 33, 32, 51, 25, 36]. Generalizing the results of [18, 56], it has been proved that deep linear networks with arbitrary convex and differentiable losses have no sub-optimal (a.k.a. bad) local minima, that is, all local minima are global, when the hidden layers are at least as wide as the input or the output layer. For nonlinear NNs, most results have focused on learning shallow networks. For example, it has been shown that there are no bad local minima in learning two-layer networks with quadratic activations, provided that the number of hidden neurons exceeds twice that of the inputs. Focusing on a binary classification setting, it has been demonstrated that despite the non-convexity present in learning one-hidden-layer leaky ReLU networks with a hinge loss criterion, all critical points are global minima if the data are linearly separable; thus, SGD can efficiently find a global optimum of a leaky ReLU network. On the other hand, it has also been shown that there exist infinitely many bad local optima in learning even two-layer ReLU networks under mild conditions; see e.g., [50, Theorem 6] and [5, Thm. 8]. Interestingly, a complete description of all sub-optimal critical points in learning two-layer ReLU networks with a hinge loss on separable data has been provided. Yet, it remains unclear whether and how one can efficiently train even a single-hidden-layer ReLU network to global optimality.
Recent efforts have also centered on understanding the generalization behavior of deep NNs by introducing and/or studying different complexity measures, including Rademacher complexity, uniform stability, and spectral complexity. However, the resulting generalization bounds do not account for the underlying training schemes, namely the optimization methods. As such, they do not provide tight guarantees on the generalization performance of (over-parameterized) networks trained with iterative algorithms. Even though recent work suggested an improved generalization bound by optimizing the PAC-Bayes bound of an over-parameterized network in a binary classification setting, this result is meaningful only when the optimization succeeds. Leveraging standard compression bounds, generalization guarantees have been derived for two-layer leaky ReLU networks trained with plain-vanilla SGD. But this bound does not carry over to ReLU networks, because plain-vanilla SGD cannot in general train ReLU networks to a global optimum.
3 Problem Formulation
Consider a binary classification setting, in which the training set comprises data sampled i.i.d. from some unknown distribution, where without loss of generality the feature vectors are assumed bounded and the labels binary. We are interested in the linearly separable case, in which there exists an optimal linear classifier vector that correctly classifies all data with unit margin. To allow for affine classifiers, a “bias term” can be accommodated by appending a unit entry to all data vectors and augmenting the classifier vector accordingly.
We deal with single-hidden-layer NNs having scalar inputs, hidden neurons, and a single output (for binary classification). The overall input-output relationship of such a two-layer NN is
which maps each input vector to a scalar output by combining nonlinear maps of linearly projected renditions of the input, effected via the ReLU activation. Clearly, due to the non-negativity of ReLU outputs, one requires at least two hidden units so that the output can take both positive and negative values to signify the ‘positive’ and ‘negative’ classes. Here, each row of the first-layer weight matrix stacks up the weights of the links connecting the input to the corresponding hidden neuron, and each entry of the second-layer weight vector is the weight of the link from the corresponding hidden neuron to the output. Collecting all weights, henceforth denoted collectively for brevity, one can express the network in a compact matrix-vector representation as
where the ReLU activation should be understood entry-wise when applied to a vector.
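To make the mapping concrete, a minimal NumPy sketch of this forward pass follows; the names `W`, `v`, and `x` are ours, standing in for the first-layer weight matrix, the second-layer weight vector, and an input vector, respectively.

```python
import numpy as np

def relu(z):
    """Entry-wise ReLU activation: max(z, 0)."""
    return np.maximum(z, 0.0)

def network_output(W, v, x):
    """Scalar output of the two-layer ReLU network: v^T relu(W x).

    W : (k, d) array; row j holds the input weights of hidden neuron j
    v : (k,)   array; fixed second-layer weights with mixed signs
    x : (d,)   array; input feature vector
    """
    return float(v @ relu(W @ x))
```

With, say, two hidden neurons and second-layer weights (1, -1), the output can take both positive and negative values, as required for signaling the two classes.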
Given our NN and adopting a hinge loss criterion, we define the empirical loss as the average loss over the training set, that is
With the network output, we construct a binary classifier via the sign function, which returns 1 for positive arguments and -1 otherwise. For this classifier, the training error (a.k.a. misclassification rate) over the training set is
where the indicator function takes the value 1 if its argument is true, and 0 otherwise.
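For concreteness, the empirical hinge loss and the training error just defined can be computed as follows (a sketch in our own notation; `net` re-implements the forward map of this section):

```python
import numpy as np

def net(W, v, x):
    # Two-layer ReLU network output (cf. the model above).
    return float(v @ np.maximum(W @ x, 0.0))

def empirical_loss(W, v, X, y):
    """Average hinge loss max(0, 1 - y_i f(x_i)) over the training set."""
    return float(np.mean([max(0.0, 1.0 - yi * net(W, v, xi))
                          for xi, yi in zip(X, y)]))

def training_error(W, v, X, y):
    """Misclassification rate of the classifier sign(f(x))."""
    signs = [1.0 if net(W, v, xi) > 0 else -1.0 for xi in X]
    return float(np.mean([s != yi for s, yi in zip(signs, y)]))
```

Note that a zero empirical hinge loss implies every sample attains margin at least one, which in turn implies a zero training error (the converse need not hold).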
Figure 1: Illustration of sub-optimal critical points (where a data sample yields a non-zero loss) for a ReLU network with two hidden neurons, corresponding to two hyperplanes (the left and right blue lines in each plot). The arrows point to the positive cone of each hyperplane. The training dataset contains three samples, two belonging to class ‘+’ (colored red) and one belonging to class ‘−’ (colored black). Data points with a non-zero classification error (hence non-zero loss) must lie in the negative cone of all hyperplanes.
In this paper, we fix the second layer of the network to be some constant vector given a priori, with at least one positive and at least one negative entry. Therefore, training the ReLU network boils down to learning the first-layer weight matrix only. As such, the network is henceforth parameterized by its first-layer weights, and the goal is to solve the following optimization problem
Evidently, for separable data and the ReLU network considered in this paper, the globally minimal loss is zero. Due to the piecewise linear (non-smooth) ReLU activations, the empirical loss is non-smooth. It can further be shown to be non-convex (e.g., [5, Proposition 5.1]), and it indeed admits infinitely many (sub-optimal) local minima [5, Thm. 8].
Interestingly though, it is possible to provide an analytical characterization of all sub-optimal solutions. Specifically,
at any critical point, any data sample incurring a non-zero loss must render all ReLU outputs zero.111The critical point of a general non-convex and non-smooth function is defined by invoking the Clarke sub-differential. Precisely, consider a function that is locally Lipschitz around a point, and differentiable everywhere except on a set of Lebesgue measure zero. The convex hull of the set of limits of gradients evaluated along sequences converging to that point is the Clarke sub-differential there, and the point is critical if this convex hull contains the zero element.
Expressed differently, if a data pair yields a non-zero loss at a critical point, the ReLU output must vanish at all hidden neurons. Building on this observation, we say that a critical point is sub-optimal if it obeys simultaneously the following two conditions: i) some data sample incurs a non-zero loss; and ii) all hidden neurons are inactive at that sample. According to these two defining conditions, the set of all sub-optimal critical points includes various local minima, as well as all maxima and saddle points; see Figure 1 for an illustration. It is also clear from Figure 1 that in certain cases the two conditions cannot be violated by small perturbations of the weights, suggesting that there are in general infinitely many sub-optimal critical points. Therefore, optimally training even such a single-hidden-layer ReLU network is indeed challenging.
Consider minimizing the empirical loss by means of plain-vanilla SGD with a constant learning rate, as
with the (sub-)gradient of the hinge loss at a randomly sampled datum given by
where the diagonal matrix holds the entries of the second-layer weight vector on its diagonal, and the indicator function is understood entry-wise. For any sub-optimal critical point incurring a non-zero loss on some datum, it can be readily deduced that the corresponding stochastic (sub-)gradient vanishes.
Following standard convention, we say that a ReLU is active if its output is non-zero, and inactive otherwise; we denote the state of each ReLU by its activity indicator function. In words, at a sub-optimal critical point there always exists some data sample(s) for which all hidden neurons become inactive. This is corroborated by the fact that, under some conditions, plain-vanilla SGD converges to a sub-optimal local minimum with high probability. It will also be verified by our numerical tests in Section 6 that SGD can indeed get stuck in sub-optimal local minima when training ReLU networks.
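The update, and the way it can stall, can be sketched as follows. The row-wise subgradient formula is our reading of the update rule above; the names `W`, `v`, `x`, `y`, and `eta` are ours.

```python
import numpy as np

def sgd_subgradient(W, v, x, y):
    """Hinge-loss (sub-)gradient at one sample (x, y) w.r.t. W.

    Row j equals -y * v_j * 1{w_j^T x > 0} * x when the margin
    y * f(x) is below one, and zero otherwise.
    """
    pre = W @ x                              # pre-activations of all ReLUs
    f = float(v @ np.maximum(pre, 0.0))      # network output
    if y * f >= 1.0:                         # zero hinge loss: no update
        return np.zeros_like(W)
    active = (pre > 0.0).astype(float)       # ReLU activity indicators
    return -y * (v * active)[:, None] * x[None, :]

def sgd_step(W, v, x, y, eta):
    """One plain-vanilla SGD update with constant step size eta."""
    return W - eta * sgd_subgradient(W, v, x, y)
```

When all pre-activations are non-positive on a sample, `active` is all zeros and the step vanishes even though the hinge loss on that sample is non-zero: exactly the stalling behavior at sub-optimal critical points described above.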
4 Main Results
In this section, we present our main results, which include a modified SGD algorithm and theory for efficiently training single-hidden-layer ReLU networks to global optimality. As in the convergence analysis of the Perceptron algorithm (see e.g., [41, Chapter 9]), we define an update at a given iteration as non-zero or effective if the corresponding (modified) stochastic gradient is non-zero.
As explained in Section 3, plain-vanilla SGD iterations can get stuck in sub-optimal critical points. Recall from (9) that whenever this happens, some data sample must incur a non-zero loss while all hidden neurons are inactive at that sample. To avoid being trapped at such points, we will endow the algorithm with a non-zero ‘(sub-)gradient’ even at a sub-optimal critical point, so that the algorithm can continue updating and thus has a chance to escape. If successful, then when the algorithm converges, every data sample must incur a zero loss (cf. (9)), thanks to the linear separability of the data. By the definition of the hinge loss in (4), this guarantees that the algorithm converges to a global optimum. Two critical questions arise at this point: Q1) How can we endow a non-zero ‘(sub-)gradient’ based search direction even at a sub-optimal critical point, while having the global minima as limiting points of the algorithm? and Q2) How is it possible to guarantee convergence?
Question Q1) can be answered by ensuring that at least one ReLU is active at a non-optimal point. Toward this objective, and motivated by recent efforts in escaping saddle points, we are prompted to add a zero-mean random noise vector to the input of the activity indicator functions of all ReLUs. This would perturb the ReLU activity indicators in the subgradient (cf. (9)) at every iteration. In practice, Gaussian additive noise with sufficiently large variance works well.
Albeit empirically effective in training ReLU networks, SGD with such architecture-agnostic noise injected into all ReLU activity indicator functions cannot guarantee convergence in general; indeed, convergence is difficult or even impossible to establish. We shall take a different route to bypass this hurdle, which will lead to a simple algorithm provably convergent to a desired global optimum within a finite number of non-zero updates. This result holds regardless of the data distribution, initialization, or network size (number of hidden neurons). Toward this end, to ensure convergence of our modified SGD algorithm, we carefully design the noise injection process so as to maintain at least one non-zero ReLU activity indicator at every non-optimal critical point.
Per iteration, for the picked data sample, we inject Gaussian noise into a ReLU activity indicator function in the SGD update of (9) if and only if the corresponding condition holds, and we repeat this for all neurons.
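A rough sketch of one iteration of the noise-injected update follows. The exact injection condition and variance selection are those of Algorithm 1 and Section 4.2; the rule below (perturb the indicator of neuron j only when that neuron is inactive on the sampled datum, with variance `c**2`) is our illustrative assumption, not the paper's precise rule.

```python
import numpy as np

def noisy_sgd_step(W, v, x, y, eta, c, rng):
    """One iteration of noise-injected SGD (a sketch of Algorithm 1).

    ASSUMED injection rule (illustration only): a Gaussian draw
    eps_j ~ N(0, c^2) perturbs the j-th activity indicator argument
    only when neuron j is inactive on the sampled datum (x, y).
    The hinge (sub)derivative 1{y f(x) < 1} is left untouched.
    """
    pre = W @ x
    f = float(v @ np.maximum(pre, 0.0))
    if y * f >= 1.0:                    # sample already has zero loss
        return W
    noise = rng.normal(0.0, c, size=pre.shape)
    perturbed = np.where(pre > 0.0, pre, pre + noise)  # assumed rule
    active = (perturbed > 0.0).astype(float)           # perturbed indicators
    grad = -y * (v * active)[:, None] * x[None, :]
    return W - eta * grad
```

With a large enough variance, some perturbed indicator is non-zero with high probability even when all neurons are inactive, so the iterate keeps moving instead of stalling at a sub-optimal critical point.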
Interestingly, the noise variance admits simple choices, so long as it is selected sufficiently large to match the size of the corresponding summands. We will build up more intuition and highlight the basic principle behind this noise injection design shortly in Section 4.2, along with our formal convergence analysis. For implementation purposes, we summarize the novel SGD algorithm with randomly perturbed ReLU activity indicator functions in Algorithm 1. As far as the stopping criterion is concerned, it is safe to conclude that the algorithm has converged if there has been no non-zero update for a succession of sufficiently many iterations. This holds with high probability, which depends on the noise variance and on the numbers of positive and negative entries in the second-layer weight vector. We have the following result, whose proof is provided in Appendix D.4.
Suppose the noise variance is kept fixed for all neurons and all iterations, and consider cycling deterministically through the training set. If there is no non-zero update after a succession of sufficiently many iterations, then Algorithm 1 converges to a global optimum with a probability that can be bounded explicitly in terms of the cumulative distribution function of the standardized Gaussian distribution.
With the carefully designed noise injection rule, our algorithm constitutes a non-trivial generalization of the Perceptron and plain-vanilla SGD algorithms for learning ReLU networks. Implementing Algorithm 1 is as easy as running plain-vanilla SGD, requiring negligible extra computational overhead. Both numerically and analytically, we will demonstrate the power of our principled noise injection into partial ReLU activity indicator functions, and establish the optimality, efficiency, and generalization performance of Algorithm 1 in learning two-layer (over-parameterized) ReLU networks on linearly separable data.
It is worth remarking that the random (Gaussian) noise in our proposal is added solely to the ReLU activity indicator functions, rather than to any of the hidden neurons. This is evident from Step 5 of Algorithm 1: the first indicator function, namely the (sub)derivative of the hinge loss, is kept exactly as in plain-vanilla SGD and is not affected by the noise. Moreover, our use of random noise in this way distinguishes itself from the vast literature on evading saddle points, where noise is simply added either to the iterates or to the (stochastic) (sub)gradients. This distinction endows our approach with the unique capability of also escaping local minima (in addition to saddle points). To the best of our knowledge, our approach is the first of its kind to provably yet efficiently escape local minima under suitable conditions.
Compared with previous efforts in learning ReLU networks, our proposed Algorithm 1 provably converges to a global optimum within a finite number of non-zero updates, without any assumptions on the data distribution, training/network size, or initialization. This holds even in the presence of exponentially many local minima and saddle points. To the best of our knowledge, Algorithm 1 provides the first solution to efficiently train such a single-hidden-layer ReLU network to global optimality with a hinge loss, so long as the training samples are linearly separable. Generalizations to other objective functions based on, e.g., variants of the hinge loss such as the smoothed hinge loss (a.k.a. polynomial hinge loss), as well as to multilayer ReLU networks, are possible and left for future research.
4.2 Convergence analysis
In this section, we analyze the convergence of Algorithm 1 for learning single-hidden-layer ReLU networks with a hinge loss criterion on linearly separable data, namely for minimizing the empirical loss in (4). Recall that since we only train the first layer, with the second-layer weight vector fixed a priori, we can assume without further loss of generality that the entries of the second-layer vector are all non-zero. Otherwise, one can exclude the corresponding hidden neurons from the network, yielding an equivalent reduced-size NN whose second-layer weight vector has all entries non-zero.
Before presenting our main convergence results for Algorithm 1, we introduce some notation. To start, consider the index sets of data samples belonging to the ‘positive’ and ‘negative’ classes, respectively. It is self-evident that both index sets are non-empty under our assumptions. Putting our work in context, it is useful to first formally summarize the landscape properties of the objective function, which can help identify the challenges in learning ReLU networks.
The empirical loss function has the following properties: i) it is non-convex; and ii) for each sub-optimal local minimum (incurring a non-zero loss), there exists (at least) one datum for which all ReLUs become inactive.
The proof of Property i) in Proposition 2 can be easily adapted from that of [5, Proposition 5.1], while Property ii) is a special case of [23, Thm. 5] for a fixed second layer; hence both are omitted in this paper.
We will provide an upper bound on the number of non-zero updates that Algorithm 1 performs until no non-zero update occurs within a succession of sufficiently many iterations (cf. (10)). This, together with the fact that no sub-optimal critical point can be a limiting point of Algorithm 1, owing to the Gaussian noise injection with a large enough variance at every iteration, will guarantee convergence of Algorithm 1 to a global optimum. Specifically, the main result is summarized in the following theorem.
Theorem 1 (Optimality).
If all rows of the initialization are bounded in norm by some constant, and the second-layer weight vector is kept fixed with both positive and negative (but non-zero) entries, then Algorithm 1 with some constant step size converges to a global minimum of the empirical loss after performing at most a finite number of non-zero updates, which can be bounded explicitly. In particular, for zero initialization, the bound on the number of non-zero updates simplifies further.
Regarding Theorem 1, a couple of observations are of interest. The developed Algorithm 1 converges to a globally optimal solution of the non-convex optimization problem (4) within a finite number of non-zero updates, which implicitly corroborates the ability of Algorithm 1 to escape sub-optimal local minima, as well as saddle points. This holds regardless of the underlying data distribution, the number of training samples, the number of hidden neurons, or even the initialization. It is also worth highlighting that the number of non-zero updates does not depend on the dimension of the input vectors, but it scales (in the worst case) with the network size, and it is inversely proportional to the step size. Recall that the worst-case bound for SGD learning of leaky-ReLU networks with zero initialization is [5, Thm. 2]
where again the optimal linear classifier correctly separates the data with unit margin. Clearly, the upper bound above does not depend on the network size. This is because the loss function corresponding to learning leaky-ReLU networks has no bad local minima: all critical points are global minima. This is in sharp contrast with the loss function associated with learning ReLU networks investigated here, which generally involves infinitely many bad local minima! On the other hand, the bound in (12) scales inversely with the squared ‘leaky factor’ of leaky ReLUs. This motivates letting the leaky factor vanish, which corresponds to letting the leaky ReLU approach the ReLU. In such a case, (12) would yield a worst-case bound of infinity for learning ReLU networks, corroborating the difficulty, and in fact impossibility, of learning ReLU networks with ‘plain-vanilla’ SGD. Indeed, the gap between the bound in Theorem 1 and the bound in (12) is the price our noise-injected SGD Algorithm 1 pays for being able to escape local minima and saddle points. Last but not least, Theorem 1 also suggests that for a given network and a fixed step size, Algorithm 1 with simple noise-variance choices works well too.
We briefly present the main ideas behind the proof of Theorem 1 next, but delegate the technical details to Appendix A.1. Our proof mainly builds upon the convergence proof of the classical Perceptron algorithm (see e.g., [41, Thm. 9.1]), and it is also inspired by that of [5, Thm. 1]. Nonetheless, our novel approach of performing SGD with principled noise injection into the ReLU activity indicator functions distinguishes itself from previous efforts. Since we are mainly interested in the (maximum) number of non-zero updates performed until convergence, we will assume for notational convenience that all iterations in (10) of Algorithm 1 perform a non-zero update. This assumption incurs no loss of generality: after the algorithm converges, one can always re-count the effective iterations corresponding to non-zero updates and re-number them sequentially.
Our main idea is to demonstrate that every single non-zero update of the form (10) in Algorithm 1 makes non-negligible progress in bringing the current iterate toward some global optimum constructed from the linear classifier weight vector. Specifically, as in the convergence proof of the Perceptron algorithm, we will separately establish a lower bound on the Frobenius inner product between the iterate and this global optimum (the component-wise inner product of two same-size matrices, as though they were vectors), and an upper bound on their Frobenius norms. Both bounds will be carefully expressed as functions of the number of performed non-zero updates. By the Cauchy-Schwarz inequality, the lower bound on the inner product cannot grow larger than the product of the norm bounds. Since every non-zero update brings the lower and upper bounds closer by a non-negligible amount, the worst case (in terms of the number of non-zero updates) is to have the two bounds meet at convergence. Setting them equal, we deduce an upper bound on the number of non-zero updates by solving a univariate quadratic equation (the bound is slightly loosened by a series of inequalities used in the proof to keep it clean).
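Schematically, the Perceptron-style argument can be written as follows, with generic constants standing in for the margin- and noise-dependent quantities derived in the proof (the symbols below, including the constants and the step size, are our placeholders):

```latex
% After t non-zero updates, alignment with the constructed global
% optimum W^* grows at least linearly in t:
\langle \mathbf{W}^{t}, \mathbf{W}^{\ast} \rangle_F
  \;\ge\; \langle \mathbf{W}^{0}, \mathbf{W}^{\ast} \rangle_F + c_1\,\eta\, t ,
% while the iterate norm grows at most like the square root of t:
\|\mathbf{W}^{t}\|_F^{2} \;\le\; \|\mathbf{W}^{0}\|_F^{2} + c_2\,\eta^{2}\, t .
% The Cauchy--Schwarz inequality then forces
%   c_1 \eta t \;\lesssim\; \|\mathbf{W}^{\ast}\|_F
%       \sqrt{\|\mathbf{W}^{0}\|_F^{2} + c_2 \eta^{2} t},
% a univariate quadratic inequality in t whose largest root yields the
% finite bound on the number of non-zero updates.
```

The linear-versus-square-root growth mismatch is what forces the update count to be finite, exactly as in the classical Perceptron mistake bound.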
It will become clear in the proof that injecting random noise into just a subset of (rather than all) ReLU activity indicator functions enables us to leverage two key inequalities, one certifying a non-zero update and one certifying a zero update, valid for all data samples. These inequalities uniquely characterize whether an update is non-zero or not. In turn, this characterization is the key to establishing the desired lower and upper bounds on the two sides of the Cauchy-Schwarz inequality, a critical ingredient of our convergence analysis.
4.3 Lower bound
Besides the worst-case upper bound given in Theorem 1, we also provide a lower bound on the number of non-zero updates required by Algorithm 1 for convergence, which is summarized in the following theorem. The proof is provided in Appendix C.3.
Theorem 2 (Lower bound).
The lower bound in Theorem 2 on the number of non-zero updates matches that for learning single-hidden-layer leaky-ReLU networks initialized from zero [5, Thm. 4]. On the other hand, it is also clear that the worst-case upper bound established in Theorem 1 is (significantly) looser than the lower bound here. The gap between the two bounds (in learning ReLU versus leaky-ReLU networks) is indeed the price we pay for escaping bad local minima and saddle points through our noise-injected SGD approach.
5 Generalization
In this section, we investigate the generalization performance of training (possibly over-parameterized) ReLU networks using Algorithm 1 with randomly perturbed ReLU activity indicator functions. Toward this objective, we will rely on compression generalization bounds, specifically for the classification error as in (3).
Recall that our ReLU network has a given number of hidden units and a fixed second-layer weight vector. Stressing the number of ReLUs in the subscript, consider the classifier obtained by training the network over the training set using Algorithm 1 with a suitably bounded initialization, as well as the set of all classifiers obtainable with any training set and any admissible initialization, not necessarily those employed by Algorithm 1.
Suppose now that Algorithm 1 has converged after a certain number of non-zero updates, as per Theorem 1, and consider the tuple of training data randomly picked by the SGD iterations of Algorithm 1. Since the number of picked samples can be smaller than the training size, the learned classifier relies on a compressed version of the training set [41, Definition 30.4]. Let the complement set collect the training data not picked by SGD; correspondingly, consider the ensemble risk of the learned classifier, and its empirical risk over this complement set. With these notational conventions, our next result follows from [41, Thm. 30.2].
Theorem 3 (Compression bound).
Under a mild condition on the number of picked samples relative to the training size, the following inequality holds with high probability over the choice of the training set
Regarding Theorem 3, two observations are in order. The bound in (13) is non-asymptotic, but as the training size grows, the last two terms on the right-hand side vanish, implying that the ensemble risk is upper bounded by the empirical risk over the complement set. Moreover, once the SGD iterations in Algorithm 1 converge, we can identify the complement training set, and thus the empirical risk over it can be computed. After recalling that a zero loss is attained at a global optimum by Theorem 1, we obtain from Theorems 1 and 3 the following corollary.
If the number of training samples is large enough, and all rows of the initialization are suitably bounded, then the following holds with high probability over the choice of the training set
where the number of non-zero updates is given in Theorem 1.
Expressed differently, the bound in (14) suggests that in order to guarantee a low generalization error, one requires in the worst case a number of training data growing with the network size to reliably learn a two-layer ReLU network. This holds true despite the fact that Algorithm 1 can achieve a zero training loss regardless of the training size. One implication of Corollary 1 is a fundamental difference in the worst-case sample complexity for generalization between training a ReLU network and training a leaky-ReLU network, where the latter needs at most a number of data that does not grow with the network size when trained via SGD-type algorithms.
6 Numerical Tests
To validate our theoretical results, this section evaluates the empirical performance of Algorithm 1 using both synthetic and real data. To benchmark Algorithm 1, we also simulated plain-vanilla SGD. To make the comparison between the two algorithms as fair as possible, the same initialization, constant step size, and random data sampling scheme were employed. For reproducibility, the Matlab code of Algorithm 1 is publicly available at https://gangwg.github.io/RELUS/.
6.1 Synthetic data
We consider first two synthetic tests using data generated from Gaussian and uniform distributions. In the first test, feature vectors were sampled i.i.d. from a standardized Gaussian distribution, and a linear classifier vector was drawn randomly. Labels were generated according to the sign of the linear classifier output. To further ensure a unit margin for all samples, we normalized the classifier by the smallest margin attained over the training set. We performed independent experiments over a varying number of training samples using ReLU networks comprising different numbers of hidden neurons. The second-layer weight vector was kept fixed, with some entries positive and the remaining negative. Each experiment used a random Gaussian initialization, a constant step size, and a fixed noise variance, along with a maximum number of effective data passes.
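Our reading of this data-generation protocol can be sketched as follows; the function and variable names are ours, and the final rescaling enforces a margin of at least one, matching the separability assumption of Section 3.

```python
import numpy as np

def make_separable_gaussian(n, d, rng):
    """Generate n i.i.d. N(0, I_d) features, linearly separable with
    margin at least one w.r.t. a rescaled random classifier w."""
    X = rng.standard_normal((n, d))     # features ~ N(0, I)
    w = rng.standard_normal(d)          # random linear classifier
    y = np.sign(X @ w)
    y[y == 0] = 1.0                     # break (measure-zero) ties
    margins = y * (X @ w)
    w = w / margins.min()               # enforce y_i * w^T x_i >= 1
    return X, y, w
```

Dividing the classifier by the smallest margin rescales all margins so that the minimum equals one, which is the normalization described above.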
Figure 2 depicts our results, where we display success rates of plain-vanilla SGD (top panel) and our noise-injected SGD in Algorithm 1 (bottom panel); each plot presents results obtained from the  experiments. Within each plot, a white square signifies that  of the trials were successful, meaning that the learned ReLU network yields a training loss , while black squares indicate  success rates. Evidently, the developed Algorithm 1 trained all considered ReLU networks to global optimality, whereas plain-vanilla SGD can get stuck at bad local minima, for small  in particular. The bottom panel confirms that Algorithm 1 achieves optimal learning of single-hidden-layer ReLU networks on separable data, regardless of the network size, the number of training samples, and the initialization. The top panel, however, suggests that learning ReLU networks with plain-vanilla SGD becomes easier as  grows larger, namely as the network becomes 'more over-parameterized.'
We repeated the first test using synthetic data as well as a classifier generated i.i.d. from the uniform distribution . All other settings were kept the same. Success rates of plain-vanilla SGD are plotted in Fig. 3 (left panel), while those of the proposed Algorithm 1 are omitted, as it succeeded in all simulated tests.
6.2 Real data
Performance of Algorithm 1 in training (over-)parameterized ReLU networks is further corroborated using two real datasets: iris from UCI's machine learning repository , and MNIST images (downloaded from http://yann.lecun.com/exdb/mnist/). The iris dataset contains four-dimensional feature vectors belonging to three classes. To obtain a two-class linearly separable dataset, the first-class data vectors were relabeled , while the remaining ones were relabeled . We performed  independent experiments over a varying set of training samples using ReLU networks with  hidden neurons. Gaussian initialization from , step size , noise variance , and a maximum of  effective data passes were simulated. Success rates of plain-vanilla SGD are given in Fig. 3 (right). Again, Algorithm 1 achieved a  success rate in all simulated settings.
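The iris preprocessing step described above, relabeling the first class versus the rest to obtain a linearly separable binary task, can be sketched as follows; for convenience this uses scikit-learn's bundled copy of the dataset rather than UCI's repository (an assumption, not the paper's pipeline).

```python
import numpy as np
from sklearn.datasets import load_iris

# Load the 150 four-dimensional iris feature vectors and their 3 class labels.
X, labels = load_iris(return_X_y=True)

# Relabel into a binary task: first class (setosa) -> +1, the rest -> -1.
# Setosa versus the other two classes is known to be linearly separable.
y = np.where(labels == 0, 1.0, -1.0)
```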
The linearly separable MNIST dataset collects images of digits  (labeled ) and  (labeled ), each of dimension . We performed  independent experiments over a varying set of training samples using ReLU networks with  hidden neurons. The constant step size of both plain-vanilla SGD and Algorithm 1 was set to  () when the ReLU networks have  () hidden units, while the noise variance in Algorithm 1 was set to . As in the first experiment on randomly generated data, we plot success rates of plain-vanilla SGD (top panel) and our noise-injected SGD (bottom panel) over training sets of MNIST images in Figure 4. Evidently, Algorithm 1 achieved a  success rate under all testing conditions, which confirms our theoretical results in Theorem 1 and markedly improves upon its plain-vanilla SGD alternative.
7 Conclusions
This paper approached the task of training ReLU networks from a non-convex optimization point of view. Focusing on binary classification with a hinge loss criterion, this contribution put forth the first algorithm that can provably yet efficiently train any single-hidden-layer ReLU network to global optimality, provided that the data are linearly separable. The algorithm is as simple as plain-vanilla SGD, but it is able to exploit the power of random additive noise to break 'optimality' of the SGD learning process at any sub-optimal critical point. We established an upper and a lower bound on the number of non-zero updates that the novel algorithm requires for convergence to a global optimum. Our result holds regardless of the underlying data distribution, network/training size, or initialization. We further developed generalization error bounds for two-layer NN classifiers with ReLU activations, which provide the first theoretical guarantee for the generalization behavior of ReLU networks trained with SGD. A comparison of such bounds with those of a leaky ReLU network reveals a key difference in the sample complexity required for generalization between optimally learning a ReLU network and optimally learning a leaky ReLU network.
Since our analyses, comparisons, and corroborating tests here focus on single-hidden-layer networks with a hinge loss criterion, our future work will naturally aim at generalizing the novel noise-injection design to SGD for multilayer ReLU networks, considering alternative loss functions, and pursuing generalizations to (multi-)kernel based approaches.
-  G. An, "The effects of adding noise during backpropagation training on a generalization performance," Neural Comput., vol. 8, no. 3, pp. 643–674, Apr. 1996.
-  P. Auer, M. Herbster, and M. K. Warmuth, “Exponentially many local minima for single neurons,” in Adv. in Neural Inf. Process. Syst., Denver, Colorado, Nov. 27–Dec. 2, 1995, pp. 316–322.
-  A. Blum and R. L. Rivest, “Training a 3-node neural network is NP-complete,” in Adv. in Neural Inf. Process. Syst., Cambridge, Massachusetts, Aug. 3–5, 1988, pp. 494–501.
-  A. Brutzkus and A. Globerson, “Globally optimal gradient descent for a ConvNet with Gaussian inputs,” in Intl. Conf. on Mach. Learn., vol. 70, Sydney, Australia, Aug. 6–11, 2017.
-  A. Brutzkus, A. Globerson, E. Malach, and S. Shalev-Shwartz, “SGD learns over-parameterized networks that provably generalize on linearly separable data,” in Intl. Conf. on Learn. Rep., Vancouver, BC, Canada, Apr. 30–May 3, 2018.
-  F. H. Clarke, Optimization and Nonsmooth Analysis. SIAM, 1990, vol. 5.
-  D. Dheeru and E. Karra Taniskidou, “UCI machine learning repository,” 2017. [Online]. Available: http://archive.ics.uci.edu/ml.
-  S. S. Du, J. D. Lee, Y. Tian, B. Poczos, and A. Singh, “Gradient descent learns one-hidden-layer CNN: Don’t be afraid of spurious local minima,” in Intl. Conf. on Mach. Learn., vol. 80, Stockholm, Sweden, July 10–15, 2018, pp. 1338–1347.
-  G. K. Dziugaite and D. M. Roy, “Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data,” arXiv:1703.11008, 2017.
-  H. Fu, Y. Chi, and Y. Liang, “Guaranteed recovery of one-hidden-layer neural networks via cross entropy,” arXiv:1802.06463, 2018.
-  R. Ge, F. Huang, C. Jin, and Y. Yuan, "Escaping from saddle points — Online stochastic gradient for tensor decomposition," in Conf. on Learn. Theory, vol. 40, Paris, France, July 3–6, 2015, pp. 797–842.
-  X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Intl. Conf. on Artif. Intell. and Stat., Sardinia, Italy, May 13–15, 2010, pp. 249–256.
-  I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge: MIT press, 2016, vol. 1.
-  M. Gori and A. Tesi, “On the problem of local minima in backpropagation,” IEEE Trans. Pattern Anal. Mach. Intell., no. 1, pp. 76–86, Jan. 1992.
-  L. Holmstrom and P. Koistinen, “Using additive noise in back-propagation training,” IEEE Trans. Neural Netw., vol. 3, no. 1, pp. 24–38, Jan. 1992.
-  G. Jagatap and C. Hedge, “Learning ReLU networks via alternating minimization,” arXiv:1806.07863, 2018.
-  S. M. M. Kalan, M. Soltanolkotabi, and A. S. Avestimehr, “Fitting ReLUs via SGD and Quantized SGD,” arXiv:1901.06587, 2019.
-  K. Kawaguchi, “Deep learning without poor local minima,” in Adv. in Neural Inf. Process. Syst., Barcelona, Spain, Dec. 5–10, 2016, pp. 586–594.
-  K. Kawaguchi and L. P. Kaelbling, “Elimination of all bad local minima in deep learning,” arXiv:1901.00279, 2019.
-  K. Kawaguchi, L. P. Kaelbling, and Y. Bengio, “Generalization in deep learning,” arXiv:1710.05468, 2017.
-  T. Laurent and J. von Brecht, “Deep linear neural networks with arbitrary loss: All local minima are global,” in Intl. Conf. on Mach. Learn., Stockholm, Sweden, July 10–15, 2018, pp. 2908–2913.
-  ——, “The multilinear structure of ReLU networks,” in Intl. Conf. on Mach. Learn., vol. 80, Stockholm, Sweden, July 10–15, 2018.
-  D. Li, T. Ding, and R. Sun, “Over-parameterized deep neural networks have no strict local minima for any continuous activations,” arXiv:1812.11039, 2018.
-  Y. Li and Y. Liang, “Learning overparameterized neural networks via stochastic gradient descent on structured data,” arXiv:1808.01204, 2018.
-  S. Liang, R. Sun, J. D. Lee, and R. Srikant, “Adding one neuron can eliminate all bad local minima,” arXiv:1805.08671, 2018.
-  N. Littlestone and M. Warmuth, “Relating data compression and learnability,” University of California, Santa Cruz, Tech. Rep., 1986.
-  S. Lu, M. Hong, and Z. Wang, “On the sublinear convergence of randomly perturbed alternating gradient descent to second order stationary solutions,” arXiv:1802.10418, 2018.
-  M. Nacson, N. Srebro, and D. Soudry, “Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate,” arXiv:1806.01796, 2018.
-  V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Intl. Conf. on Mach. Learn., Haifa, Israel, June 21–24, 2010, pp. 807–814.
-  B. Neyshabur, R. Tomioka, and N. Srebro, “In search of the real inductive bias: On the role of implicit regularization in deep learning,” arXiv:1412.6614, 2014.
-  Q. Nguyen, “On connected sublevel sets in deep learning,” arXiv:1901.07417, 2019.
-  Q. Nguyen and M. Hein, “Optimization landscape and expressivity of deep CNNs,” in Intl. Conf. Mach. Learn., 2018, pp. 3727–3736.
-  A. B. Novikoff, “On convergence proofs for perceptrons,” in Proc. Symp. Math. Theory Automata, vol. 12, 1963, pp. 615–622.
-  S. Oymak, “Stochastic gradient descent learns state equations with nonlinear activations,” arXiv:1809.03019, 2018.
-  S. Oymak and M. Soltanolkotabi, “Towards moderate overparameterization: Global convergence guarantees for training shallow neural networks,” arXiv:1902.04674, 2019.
-  B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, and S. Ganguli, “Exponential expressivity in deep neural networks through transient chaos,” in Adv. in Neural Inf. Process. Syst., Barcelona, Spain, Dec. 5-10, 2016, pp. 3360–3368.
-  M. Raghu, B. Poole, J. Kleinberg, S. Ganguli, and J. Sohl-Dickstein, “On the expressive power of deep neural networks,” in Intl. Conf. on Mach. Learn., vol. 70, Sydney, Australia, Aug. 6–11, 2017, pp. 2847–2854.
-  F. Rosenblatt, “The perceptron: A probabilistic model for information storage and organization in the brain,” Psychol. Rev., vol. 65, no. 6, p. 386, Nov. 1958.
-  I. Safran and O. Shamir, “Spurious local minima are common in two-layer ReLU neural networks,” in Intl. Conf. on Mach. Learn., vol. 80, Stockholm, Sweden, July 10–15, 2018, pp. 4430–4438.
-  S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms. New York, NY: Cambridge University Press, 2014.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
-  M. Soltanolkotabi, “Learning ReLU via gradient descent,” in Adv. in Neural Inf. Process. Syst., Long Beach, CA, Dec. 4–9, 2017, pp. 2007–2017.
-  M. Soltanolkotabi, A. Javanmard, and J. D. Lee, “Theoretical insights into the optimization landscape of over-parameterized shallow neural networks,” arXiv:1707.04926, 2017.
-  D. Soudry, E. Hoffer, M. S. Nacson, S. Gunasekar, and N. Srebro, “The implicit bias of gradient descent on separable data,” J. Mach. Learn. Res., vol. 19, no. 70, pp. 1–57, 2018.
-  C. Wang and J. C. Principe, “Training neural networks with additive noise in the desired signal,” IEEE Trans. Neural Netw., vol. 10, no. 6, pp. 1511–1517, Nov. 1999.
-  G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” IEEE Trans. Inf. Theory, vol. 64, no. 2, pp. 773–794, Feb. 2018.
-  T. Xu, Y. Zhou, K. Ji, and Y. Liang, “Convergence of SGD in learning ReLU models with separable data,” arXiv:1806.04339, 2018.
-  G. Yehudai and O. Shamir, “On the power and limitations of random features for understanding neural networks,” arXiv:1904.00687, 2019.
-  C. Yun, S. Sra, and A. Jadbabaie, “A critical view of global optimality in deep learning,” arXiv:1802.03487, 2018.
-  ——, “Efficiently testing local optimality and escaping saddles for ReLU networks,” arXiv:1809.10858, 2018.
-  ——, “Small nonlinearities in activation functions create bad local minima in neural networks,” arXiv:1802.03487, 2018.
-  L. Zhang, G. Wang, and G. B. Giannakis, “Real-time power system state estimation and forecasting via deep neural networks,” arXiv:1811.06146, Nov. 2018.
-  X. Zhang, Y. Yu, L. Wang, and Q. Gu, “Learning one-hidden-layer ReLU networks via gradient descent,” arXiv:1806.07808, 2018.
-  K. Zhong, Z. Song, P. Jain, P. L. Bartlett, and I. S. Dhillon, “Recovery guarantees for one-hidden-layer neural networks,” in Intl. Conf. on Mach. Learn., vol. 70, Sydney, Australia, Aug. 6–11, 2017, pp. 4140–4149.
-  Y. Zhou and Y. Liang, “Critical points of linear neural networks: Analytical forms and landscape properties,” in Intl. Conf. on Learn. Rep., Vancouver, BC, Canada, Apr. 30-May 3, 2018.