1 Introduction
Local search algorithms like stochastic gradient descent
[4] or its variants have achieved huge success in training deep neural networks (see, e.g., [5]; [6]; [7]). Despite the spurious saddle points and local minima on the loss surface [3], it has been widely conjectured that all local minima of the empirical loss lead to similar training performance [1, 2]. For example, [8] empirically showed that neural networks with identical architectures but different initialization points can converge to local minima with similar classification performance. However, it remains a challenge to characterize the theoretical properties of the loss surface for neural networks.
In the setting of regression problems, theoretical justifications have been established to support the conjecture that all local minima lead to similar training performance. For shallow models, [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] provide conditions under which local search algorithms are guaranteed to converge to the globally optimal solution of the regression problem. For deep linear networks, it has been shown that every local minimum of the empirical loss is a global minimum [21, 22, 23, 24, 25]. To characterize the loss surface of more general deep networks for regression tasks, [2] proposed an interesting approach: based on certain constructions on network models and additional assumptions, they relate the loss function to a spin glass model and show that almost all local minima have similar empirical loss and that the number of bad local minima decreases quickly with the distance to the global optimum. Despite these interesting results, properly justifying their assumptions remains a concern. More recently, it has been shown [26, 27] that, when the dataset satisfies certain conditions, if one layer in the multilayer network has more neurons than the number of training samples, then a subset of local minima are global minima.
Although loss surfaces in regression tasks have been well studied, the theoretical understanding of loss surfaces in classification tasks is still limited. [27, 28, 29] treat the classification problem as a regression problem by using the quadratic loss, and show that (almost) all local minima are global minima. However, the global minimum of the quadratic loss does not necessarily have zero misclassification error even in the simplest cases (e.g., every global minimum of the quadratic loss can have nonzero misclassification error even when the dataset is linearly separable and the network is a linear network). This issue was mentioned in [26], where a different loss function was used, but their result covers only the linearly separable case and a subset of the critical points.
In view of the prior work, the context and contributions of our paper are as follows:

Prior work on quadratic and related loss functions suggests that one can achieve zero misclassification error at all local minima by overparameterizing the neural network. The reason for overparameterization is that the quadratic loss function tries to match the output of the neural network to the label of each training sample.

On the other hand, hinge loss-type functions only try to match the sign of the outputs with the labels. So it may be possible to achieve zero misclassification error without overparametrization. We provide conditions under which the misclassification error of neural networks is zero at all local minima for hinge-loss functions.

Our conditions are roughly of the following form: the neurons have to be increasing and strictly convex; the neural network should either be single-layered or multi-layered with a shortcut-like connection; and the surrogate loss function should be a smooth version of the hinge loss function.

We also provide counterexamples to show that when these conditions are relaxed, the result may not hold.

We establish our results under the assumption that either the dataset is linearly separable or the positively and negatively labeled samples are located on different subspaces. Whether this assumption is necessary is an open problem, except in the case of certain special neurons.
2 Preliminaries
Network models.
Given an input vector
of dimension , we consider a neural network with layers for binary classification. We denote by the number of neurons on the th layer (note that and). We denote the neuron activation function by
. Let denote the weight matrix connecting the th layer and the th layer and denote the bias vector for the neurons in the th layer. Therefore, the output of the network can be expressed by , where denotes all parameters in the neural network.
Data distribution. In this paper, we consider binary classification tasks where each sample is drawn from an underlying data distribution defined on . The sample is considered positive if , and negative otherwise. Let denote an orthonormal basis of the space . Let and denote two subsets of such that all positive and negative samples are located on the linear span of the sets and , respectively, i.e., and . Let denote the size of the set , let denote the size of the set , and let denote the size of the set .
Loss and error. Let denote a dataset with samples, each independently drawn from the distribution . Given a neural network parameterized by and a loss function in binary classification tasks (we note that, in regression tasks, the empirical loss is usually defined as ), we define the empirical loss as the average loss of the network on a sample in the dataset , i.e.,
Furthermore, for a neural network
, we define a binary classifier
of the form , where the sign function if , and otherwise. We define the training error (also called the misclassification error) as the misclassification rate of the neural network on the dataset , i.e., where is the indicator function. The training error measures the classification performance of the network on the finite samples in the dataset .
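Since the formulas for the empirical loss and the training error are elided above, a minimal sketch may help fix the definitions. The convention that the surrogate loss acts on the margin y·f(x), and that sign(0) is taken as −1, are assumptions made for this sketch:

```python
import numpy as np

def empirical_loss(f, X, y, loss):
    # average surrogate loss over the dataset; we assume the loss acts
    # on the margin y * f(x), matching the hinge-type losses discussed
    # in the paper (the exact formula is elided in the text)
    return float(np.mean([loss(yi * f(xi)) for xi, yi in zip(X, y)]))

def training_error(f, X, y):
    # misclassification rate: fraction of samples whose predicted sign
    # disagrees with the label (sign(0) is treated as -1 here)
    preds = np.array([1.0 if f(xi) > 0 else -1.0 for xi in X])
    return float(np.mean(preds != y))
```

With a surrogate loss satisfying Assumption 1 below, `training_error` is upper bounded by `empirical_loss`, which is the inequality used throughout Section 3.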
3 Main Results
In this section, we present the main results. We first introduce several important conditions in order to derive the main results, and we will provide further discussions on these conditions in the next section.
3.1 Conditions
To fully specify the problem, we need to specify our assumptions on several components of the model, including: (1) the loss function, (2) the data distribution, (3) the network architecture and (4) the neuron activation function.
Assumption 1 (Loss function)
Let denote a loss function satisfying the following conditions: (1) is a surrogate loss function, i.e., for all , where denotes the indicator function; (2) has continuous derivatives up to order on ; (3) is nondecreasing (i.e., for all ) and there exists a positive constant such that iff .
The first condition in Assumption 1 ensures that the training error is always upper bounded by the empirical loss , i.e., . This guarantees that the neural network can correctly classify all samples in the dataset (i.e., ), when the neural network achieves zero empirical loss (i.e., ). The second condition ensures that the empirical loss has continuous derivatives with respect to the parameters up to a sufficiently high order. The third condition ensures that the loss function is nondecreasing and is achievable if and only if . Here, we provide a simple example of the loss function satisfying all conditions in Assumption 1: the polynomial hinge loss, i.e., . We note that, in this paper, we use to denote the empirical loss when the loss function is and the network is parametrized by a set of parameters . Further results on the impact of loss functions are presented in Section 4.
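The exact formula for the polynomial hinge loss is elided above, but a standard form consistent with Assumption 1 is ℓ(z) = max(0, 1 − z)^p, which dominates the 0-1 indicator, is zero iff the margin z ≥ 1, and has continuous derivatives up to order p − 1. This specific form is our assumption, not necessarily the paper's:

```python
import numpy as np

def poly_hinge(z, p=3):
    # a plausible form of the polynomial hinge loss (the paper's exact
    # formula is elided): zero iff z >= 1, with p - 1 continuous
    # derivatives at the kink z = 1
    return np.maximum(0.0, 1.0 - z) ** p
```

For p = 3 this loss is twice continuously differentiable, which is enough smoothness for the Taylor-expansion arguments discussed in Section 3.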
Assumption 2 (Data distribution)
Assume that for random vectors independently drawn from the distribution and independently drawn from the distribution , matrices and
are full rank matrices with probability one.
Assumption 2 states that the support of the conditional distribution is sufficiently rich, so that samples drawn from it are linearly independent. In other words, this assumption rules out trivial cases where all the positively labeled points are located in a very small subset of the linear span of ; similarly for the negatively labeled samples.
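Assumption 2 is a genericity condition. The following sketch, with hypothetical dimensions, illustrates why it holds for samples drawn with continuous coefficients on a subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 10, 4, 4  # ambient dimension, subspace dimension, sample count

# orthonormal basis for a random r-dimensional subspace of R^d
B, _ = np.linalg.qr(rng.standard_normal((d, r)))

# samples on the subspace with continuous (Gaussian) coefficients:
# each column of X is one sample, as in Assumption 2
X = B @ rng.standard_normal((r, n))
```

With probability one the d × n sample matrix X has full column rank, which is exactly the full-rank requirement in Assumption 2.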
Assumption 3 (Data distribution)
Assume , i.e., .
Assumption 3 assumes that the positive and negative samples are not located on the same linear subspace. Previous works [30, 31, 32] have observed that some classes of natural images (e.g., images of faces, handwritten digits, etc.) can be reconstructed from lower-dimensional representations. For example, using dimensionality reduction methods such as PCA, one can approximately reconstruct the original image from only a small number of principal components [30, 31]. Here, Assumption 3 states that both the positively and negatively labeled samples have lower-dimensional representations, and that they do not lie in the same lower-dimensional subspace. We provide additional analysis in Section 4, showing how our main results generalize to other data distributions.
Assumption 4 (Network architecture)
Assume that the neural network is a single-layered neural network or, more generally, has shortcut-like connections as shown in Fig. 1(b), where is a single layer network and is a feedforward network.
Shortcut connections are widely used in modern network architectures (e.g., Highway Networks [34], ResNet [33], DenseNet [35], etc.), where the skip connections allow the deep layers to have direct access to the outputs of shallow layers. For instance, in the residual network, each residual block has an identity shortcut connection, as shown in Fig. 1(a), where the output of each residual block is the vector sum of its input and the output of a network .
Instead of using the identity shortcut connection, in this paper, we first pass the input through a single layer network , where the vector denotes the weight vector, the matrix denotes the weight matrix and the vector denotes the vector containing all parameters in . We then add the output of this network to the output of a network and use the sum as the output of the whole network, i.e., where the vectors and denote the vectors containing all parameters in the network and in the whole network , respectively. We note that we do not restrict the number of layers and neurons in the network : the network can be a feedforward network as introduced in Section 2, a single layer network, or even a constant. In fact, when the network is a single layer network or a constant, the whole network becomes a single layer network. Furthermore, in Section 4, we will show that if we remove this connection, or replace this shortcut-like connection with the identity shortcut connection, the main result does not hold.
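The architecture just described can be sketched as the sum of a single-layer branch g and an arbitrary branch h. The softplus neuron and the specific parametrization below are illustrative assumptions, since the paper's symbols are elided:

```python
import numpy as np

def softplus(z):
    # a convex, increasing, real-analytic neuron with positive second
    # derivative, of the kind allowed in the single-layer branch
    return np.log1p(np.exp(z))

def g(x, W, v, b):
    # the single-layer branch: a hidden layer of softplus neurons read
    # out by a linear layer (one plausible parametrization)
    return float(v @ softplus(W @ x + b))

def network(x, W, v, b, h):
    # the whole network: the shortcut-like connection adds g's output
    # to that of an arbitrary branch h; if h is a constant, the whole
    # network reduces to a single-layer network
    return g(x, W, v, b) + h(x)
```

Taking `h = lambda x: 0.0` recovers the single-layer special case mentioned in the text.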
Assumption 5 (Neuron activation)
Assume that neurons in the network are real analytic and satisfy for all . Assume that neurons in the network are real functions on .
In Assumption 5, we assume that neurons in the network are infinitely differentiable and have positive second order derivatives on , while neurons in the network are real functions. These assumptions ensure that the loss function is partially differentiable w.r.t. the parameters in the network up to a sufficiently high order, and allow us to use Taylor expansions in the analysis. Here, we list a few neurons which can be used in the network : the softplus neuron, i.e., , the quadratic neuron, i.e., , etc. We note that neurons in the networks and do not need to be of the same type; this means that a more general class of neurons can be used in the network , e.g., the threshold neuron, i.e.,
, the sigmoid neuron , etc. Further discussion on the effects of neurons on the main results is provided in Section 4.
3.2 Main Results
Now we present the following theorem to show that, when Assumptions 1-5 are satisfied, every local minimum of the empirical loss function has zero training error if the number of neurons in the network is chosen appropriately.
Theorem 1 (Linear subspace data)
Remark: (i) By setting the network to a constant, it directly follows from Theorem 1 that if the single layer network consists of neurons satisfying Assumption 5 and all other conditions in Theorem 1 are satisfied, then every local minimum of the empirical loss has zero training error. (ii) The positiveness of is guaranteed by Assumption 3. In the worst case (e.g., and ), the number of neurons needs to be at least as large as the number of samples, i.e., . However, when the two orthonormal basis sets and differ significantly (i.e., ), the number of neurons required by Theorem 1 can be significantly smaller than the number of samples (i.e., ). In fact, we can show that, when the neuron has the quadratic activation function , the assumption can be further relaxed so that the number of neurons is independent of the number of samples. We discuss this in the following proposition.
Proposition 1
Assume that Assumptions 1-5 are satisfied. Assume that samples in the dataset are independently drawn from the distribution . Assume that neurons in the network satisfy and that the number of neurons in the network satisfies . If is a local minimum of the loss function and , then holds with probability one.
Remark: Proposition 1 shows that if the number of neurons is greater than the dimension of the subspace, i.e., , then every local minimum of the empirical loss function has zero training error. We note that although the result is stronger with quadratic neurons, this does not imply that the quadratic neuron has advantages over other types of neurons (e.g., the softplus neuron). This is because, when the neuron has positive derivatives on , the result in Theorem 1 holds for datasets where positive and negative samples are linearly separable. We provide the formal statement of this result in Theorem 2. However, when the neuron has the quadratic activation function, the result in Theorem 1 may not hold for linearly separable datasets; we will illustrate this with a counterexample in the next section.
As shown in Theorem 1, when the data distribution satisfies Assumptions 2 and 3, every local minimum of the empirical loss has zero training error. However, distributions satisfying these two assumptions need not be linearly separable. Therefore, to provide a complementary result to Theorem 1, we consider the case where the data distribution is linearly separable. Before presenting the result, we first state the following assumption on the data distribution.
Assumption 6 (Linear separability)
Assume that there exists a vector such that the data distribution satisfies .
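On a finite sample, the separability required by Assumption 6 can be verified with the classical perceptron algorithm, which finds a separating hyperplane in finitely many updates exactly when the data have a positive margin. A minimal sketch:

```python
import numpy as np

def perceptron_separates(X, y, max_epochs=1000):
    # classical perceptron on (x, y) with labels y in {-1, +1}; for data
    # satisfying Assumption 6 (a positive margin exists) it converges to
    # a separating hyperplane after finitely many mistake-driven updates
    Xa = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    w = np.zeros(Xa.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(Xa, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            return True, w  # every sample is on the correct side
    return False, w
```

The `max_epochs` cap is only a practical safeguard; under Assumption 6 the mistake bound of the perceptron guarantees termination.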
In Theorem 2, we will show that when the samples drawn from the data distribution are linearly separable, and the network has a shortcut-like connection as shown in Figure 1, all local minima of the empirical loss function have zero training error if the type of neuron in the network is chosen appropriately.
Theorem 2 (Linearly separable data)
Suppose that the loss function satisfies Assumption 1 and the network architecture satisfies Assumption 4. Assume that samples in the dataset are independently drawn from a distribution satisfying Assumption 6. Assume that the single layer network has neurons and neurons in the network are twice differentiable and satisfy for all . If is a local minimum of the loss function , , then holds with probability one.
Remark: Similar to Proposition 1, Theorem 2 does not require the number of neurons to scale with the number of samples. In fact, we make a weaker assumption here: the single layer network only needs to have at least one neuron, in contrast to the at least neurons required by Proposition 1. Furthermore, we note that, in Theorem 2, we assume that neurons in the network have positive derivatives on . This implies that Theorem 2 may not hold for a subset of the neurons considered in Theorem 1 (e.g., the quadratic neuron). We will provide further discussion on the effects of neurons in the next section.
So far, we have provided results showing that under certain constraints on the (1) neuron activation function, (2) network architecture, (3) loss function and (4) data distribution, every local minimum of the empirical loss function has zero training error. In the next section, we will discuss the implications of these conditions on our main results.
4 Discussions
In this section, we discuss the effects of the (1) neuron activation, (2) shortcutlike connections, (3) loss function and (4) data distribution on the main results, respectively. We show that the result may not hold if these assumptions are relaxed.
4.1 Neuron Activations
To begin with, we discuss whether the results in Theorem 1 and 2 still hold if we vary the neuron activation function in the single layer network
. Specifically, we consider the following five classes of neurons: (1) softplus class, (2) rectified linear unit (ReLU) class, (3) leaky rectified linear unit (Leaky ReLU) class, (4) quadratic class and (5) sigmoid class. In the following, for each class of neurons, we show whether the main results hold and provide counterexamples if certain conditions in the main results are violated. We summarize our findings in Table
1. We visualize some neuron activation functions from these five classes in Fig. 2(a).
Softplus class contains neurons with real analytic activation functions , where , for all . A widely used neuron in this class is the softplus neuron, i.e., , which is a smooth approximation of the ReLU. Neurons in this class satisfy the assumptions of both Theorem 1 and Theorem 2, so both theorems hold for neurons in this class.
ReLU class contains neurons with for all , where is piecewise continuous on . Some commonly adopted neurons in this class include: threshold units, i.e., , rectified linear units (ReLU), i.e., , and rectified quadratic units (ReQU), i.e., . Neurons in this class satisfy neither the assumptions of Theorem 1 nor those of Theorem 2. In Proposition 2, we show that when the single layer network consists of neurons in the ReLU class, even if all other conditions of Theorem 1 or 2 are satisfied, the empirical loss function can have a local minimum with nonzero training error.
Proposition 2
Suppose that Assumptions 1 and 4 are satisfied. Assume that neurons in the network satisfy for all , where is piecewise continuous on . Then there exists a network architecture and a distribution satisfying the assumptions of Theorem 1 or 2 such that, with probability one, the empirical loss has a local minimum satisfying , where and are the number of positive and negative samples, respectively.
Remark: (i) The above result holds in the overparametrized case, where the number of neurons in the network is larger than the number of samples in the dataset. In addition, all counterexamples shown in Section 4.1 hold in the overparametrized case. (ii) Applying the same analysis, we can generalize the above result to a larger class of neurons satisfying the following condition: there exists a scalar such that constant for all , where is piecewise continuous on . (iii) We note that the training error is strictly nonzero when the dataset has both positive and negative samples, which happens with probability at least .
Table 1: Summary of whether each theorem holds for each neuron class.

Theorem | Softplus | ReLU | LeakyReLU | Sigmoid | Quadratic
--------|----------|------|-----------|---------|----------
1       | Yes      | No   | No        | No      | Yes
2       | Yes      | No   | No        | No      | No
LeakyReLU class contains neurons with for all , where is piecewise continuous on . Some commonly used neurons in this class include the ReLU, i.e., , the leaky rectified linear unit (LeakyReLU), i.e., for and for with some constant , and the exponential linear unit (ELU), i.e., for and for with some constant . No neuron in this class satisfies the assumptions of Theorem 1, while some neurons in this class satisfy the condition of Theorem 2 (e.g., the linear neuron ) and some do not (e.g., ReLU). In Proposition 2, we have provided a counterexample showing that Theorem 2 does not hold for some neurons in this class (e.g., ReLU). Next, we present the following proposition to show that when the network consists of neurons in the LeakyReLU class, even if all other conditions of Theorem 1 are satisfied, the empirical loss function has a local minimum with nonzero training error with high probability.
Proposition 3
Suppose that Assumptions 1 and 4 are satisfied. Assume that neurons in the network satisfy for all , where is piecewise continuous on . Then there exists a network architecture and a distribution satisfying the assumptions of Theorem 1 such that, with probability at least , the empirical loss has a local minimum with nonzero training error.
Remark: We note that, applying the same proof, we can generalize the above result to a larger class of neurons, i.e., neurons satisfying the condition that there exist two scalars and such that for all , where is piecewise continuous on . In addition, we note that the ReLU neuron (but not every neuron in the ReLU class) satisfies the definitions of both the ReLU class and the LeakyReLU class, and therefore both Propositions 2 and 3 hold for the ReLU neuron.
Sigmoid class contains neurons with constant on . We list a few commonly adopted neurons in this family: sigmoid neuron, i.e., , hyperbolic tangent neuron, i.e., , arctangent neuron, i.e., and softsign neuron, i.e.,
. We note that all real odd functions (a real function is odd if for all ) satisfy the conditions of the sigmoid class. None of the above neurons satisfies the assumptions of Theorem 1, since neurons in this class either satisfy for all or are not twice differentiable. For Theorem 2, some neurons in this class satisfy its condition (e.g., the sigmoid neuron) and some do not (e.g., the constant neuron for all ). In Proposition 2, we provided a counterexample showing that Theorem 2 does not hold for some neurons in this class (e.g., the constant neuron). Next, we present the following proposition showing that when the network consists of neurons in the sigmoid class, there always exists a data distribution satisfying the assumptions of Theorem 1 such that, with a positive probability, the empirical loss has a local minimum with nonzero training error.
Proposition 4
Suppose that Assumptions 1 and 4 are satisfied. Assume that there exists a constant such that neurons in the network satisfy for all . Assume that the dataset has samples. There exists a network architecture and a distribution satisfying the assumptions of Theorem 1 such that, with a positive probability, the empirical loss function has a local minimum satisfying , where and denote the number of positive and negative samples in the dataset, respectively.
Remark: Proposition 4 shows that when the network consists of neurons in the sigmoid class, even if all other conditions are satisfied, the result in Theorem 1 fails to hold with a positive probability.
Quadratic class contains neurons where is real analytic and strongly convex on and has a global minimum at the point . A simple example of a neuron in this class is the quadratic neuron, i.e., . It is easy to check that all neurons in this class satisfy the conditions of Theorem 1 but not those of Theorem 2. For Theorem 2, we present a counterexample showing that, when the network consists of neurons in the quadratic class, even if the positive and negative samples are linearly separable, the empirical loss can have a local minimum with nonzero training error.
Proposition 5
Suppose that Assumptions 1 and 4 are satisfied. Assume that neurons in satisfy that is strongly convex and twice differentiable on and has a global minimum at . There exists a network architecture and a distribution satisfying the assumptions of Theorem 2 such that, with probability one, the empirical loss has a local minimum satisfying , where and denote the number of positive and negative samples in the dataset, respectively.
4.2 Shortcutlike Connections
In this subsection, we discuss whether the main results still hold if we remove the shortcut-like connections or replace them with the identity shortcut connections used in the residual network [33]. Specifically, we provide two counterexamples showing that the main results do not hold in either case.
Feedforward networks. When the shortcut-like connections (i.e., the network in Figure 1(b)) are removed, the network architecture can be viewed as a standard feedforward neural network. We provide a counterexample showing that, for a feedforward network with ReLU neurons, even if the other conditions of Theorem 1 or 2 are satisfied, the empirical loss function can have a local minimum with nonzero training error. In other words, neither Theorem 1 nor Theorem 2 holds when the shortcut-like connections are removed.
Proposition 6
Suppose that Assumption 1 is satisfied. Assume that the feedforward network has at least one hidden layer and at least one neuron in each hidden layer. If neurons in the network satisfy for all , where is continuous on , then for any dataset with samples, the empirical loss has a local minimum with , where and are the number of positive and negative samples in the dataset, respectively.
Remark: The result holds for ReLUs, since it is easy to check that the ReLU neuron satisfies the above assumptions.
Identity shortcut connections. As stated earlier, adding shortcut-like connections to a network can improve the loss surface. However, the shortcut-like connections shown in Fig. 1(b) differ from some popular shortcut connections used in real-world applications, e.g., the identity shortcut connections in the residual network. Thus, a natural question arises: do the main results still hold if we use identity shortcut connections? To address this question, we provide the following counterexample showing that, when we replace the shortcut-like connections with identity shortcut connections, even if the other conditions of Theorem 1 are satisfied, the empirical loss function can have a local minimum with nonzero training error. In other words, Theorem 1 does not hold for identity shortcut connections.
Proposition 7
Assume that is a feedforward neural network parameterized by and that all neurons in are ReLUs. Define a network with identity shortcut connections as , . Then there exists a distribution satisfying the assumptions of Theorem 1 such that, with probability at least , the empirical loss has a local minimum with nonzero training error.
4.3 Loss Functions
In this subsection, we discuss whether the main results still hold if we change the loss function. We mainly focus on the following two types of surrogate loss functions: quadratic loss and logistic loss. We will show that if the loss function is replaced with the quadratic loss or logistic loss, then neither Theorem 1 nor 2 holds. In addition, we show that when the loss function is the logistic loss and the network is a feedforward neural network, there are no local minima with zero training error in the real parameter space. In Fig. 2(b), we visualize some surrogate loss functions discussed in this subsection.
Quadratic loss. The quadratic loss has been well studied in prior works. It has been shown that when the loss function is quadratic, under certain assumptions, all local minima of the empirical loss are global minima. However, the global minimum of the quadratic loss does not necessarily have zero misclassification error, even in the realizable case (i.e., the case where there exists a set of parameters such that the network achieves zero misclassification error on the dataset or the data distribution). To illustrate this, we provide a simple example where the network is a simplified linear network and the data distribution is linearly separable.
Example 1
Let the distribution satisfy that , and is a uniform distribution on the interval . For a linear model , every global minimum of the population loss satisfies .
Remark: The proof of the above result, given in Appendix B.7, is very straightforward. We provide it there only because we could not find a reference that explicitly states such a result, though we would not be surprised if it is already known. This example shows that every global minimum of the quadratic loss has nonzero misclassification error, although the linear model is able to achieve zero misclassification error on this data distribution. Similarly, one can easily find datasets on which all global minima of the quadratic loss have nonzero training error.
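The same phenomenon is easy to reproduce on a finite dataset. The construction below is a hypothetical instance in the spirit of Example 1, not the paper's (elided) one: a far-out positive sample drags the least-squares fit down until a nearby positive sample is misclassified, even though the data are linearly separable:

```python
import numpy as np

# linearly separable 1-D dataset: five negatives at x = 0, positives at
# x = 1 and x = 20 (any threshold in (0, 1) separates them perfectly)
X = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 20.0])
y = np.array([-1.0, -1.0, -1.0, -1.0, -1.0, 1.0, 1.0])

# global minimum of the quadratic loss for the linear model f(x) = w*x + b
A = np.vstack([X, np.ones_like(X)]).T
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# the outlying positive at x = 20 pulls the fit down, so the optimum of
# the quadratic loss misclassifies the positive sample at x = 1
```

Here w ≈ 0.089 and b ≈ −0.695, so f(1) ≈ −0.61 < 0: the global minimum of the quadratic loss has nonzero training error on a linearly separable dataset, whereas zero loss under a loss satisfying Assumption 1 would force zero training error.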
In addition, we provide two examples in Appendix B.8 showing that, when the loss function is replaced with the quadratic loss, even if the other conditions of Theorem 1 or 2 are satisfied, every global minimum of the empirical loss has a training error larger than with a positive probability. In other words, our main results do not hold for the quadratic loss.
The following observation may be of independent interest. In contrast to the quadratic loss, the loss functions satisfying Assumption 1 have the following two properties: (i) the minimum empirical loss is zero if and only if there exists a set of parameters achieving zero training error; (ii) every global minimum of the empirical loss has zero training error in the realizable case.
Proposition 8
Let denote a feedforward network parameterized by and let the dataset have samples. When the loss function satisfies Assumption 1 and , we have if and only if . Furthermore, if , every global minimum of the empirical loss has zero training error, i.e., .
Remark: We note that the network does not need to be a feedforward network. In fact, the same results hold for a large class of network architectures, including both architectures shown in Fig 1. We provide additional analysis in Appendix B.9.
Logistic loss. The logistic loss differs from the loss functions satisfying Assumption 1, since the logistic loss does not attain a global minimum on . Here, for the logistic loss function, we show that even if the remaining assumptions of Theorem 1 hold, every critical point is a saddle point. In other words, Theorem 1 does not hold for the logistic loss. Additional analysis on Theorem 2 is provided in Appendix B.11.
Proposition 9
Assume that the loss function is the logistic loss, i.e., . Assume that assumptions 25 are satisfied. Assume that samples in the dataset are independently drawn from the distribution . Assume that the number of neurons in the network satisfies , where . If denotes a critical point of the empirical loss , then is a saddle point. In particular, there are no local minima.
Remark: We note here that the result can be generalized to every loss function which is real analytic and has a positive derivative on .
Furthermore, we provide the following result to show that when the dataset contains both positive and negative samples, if the loss is the logistic loss, then every critical point of the empirical loss function has nonzero training error.
Proposition 10
Assume the dataset consists of both positive and negative samples. Assume that is a feedforward network parameterized by . Assume that the loss function is logistic, i.e., . If the real parameters denote a critical point of the empirical loss , then .
Remark: We provide the proof in Appendix B.12. The above proposition implies that every critical point is either a local minimum with nonzero training error or a saddle point (also with nonzero training error). We note that, similar to Proposition 9, the result can be generalized to every loss function that is differentiable and has a positive derivative on .
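The mechanism behind Propositions 9 and 10 can be seen numerically: on separable data the logistic loss keeps decreasing as the separating direction is scaled up, so it attains no minimum at any finite parameter. A small sketch with a hypothetical 1-D dataset:

```python
import numpy as np

# linearly separable 1-D data for the linear model f(x) = w * x
X = np.array([1.0, 2.0, -1.0, -2.0])
y = np.array([1.0, 1.0, -1.0, -1.0])

def logistic_loss(w):
    # empirical logistic loss: log(1 + exp(-y * f(x))) averaged over samples
    return float(np.mean(np.log1p(np.exp(-y * (w * X)))))
```

Scaling w up strictly decreases the loss while the loss stays positive, so no finite w is a minimizer; this is consistent with Proposition 9's claim that, in its setting, every critical point of the logistic loss is a saddle point.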
4.4 Open Problem: Datasets
In this paper, we have mainly considered a class of nonlinearly separable distributions where positive and negative samples are located on different subspaces. We have shown that if the samples are drawn from such a distribution, then, under certain additional conditions, all local minima of the empirical loss have zero training error. However, one may ask: how well does the result generalize to other nonlinearly separable distributions or datasets? Here, we partially answer this question by presenting the following necessary condition on the dataset for Theorem 1 to hold.
Proposition 11
Remark: The proposition implies that when the dataset does not meet this necessary condition, there exists a feedforward architecture such that the empirical loss function has a local minimum with nonzero training error. We use this implication to prove the counterexamples provided in Appendix B.14 for the cases where Assumption 2 or 3 on the dataset is not satisfied. Therefore, Theorem 1 no longer holds when Assumption 2 or 3 is removed. We note that the necessary condition shown here is not equivalent to Assumptions 2 and 3. We now present the following result giving the sufficient and necessary condition that the dataset must satisfy for Proposition 1 to hold.
Proposition 12
Suppose that the loss function satisfies Assumption 1 and neurons in the network satisfy Assumption 5. Assume that the single layer network has neurons and assume that neurons in are quadratic neurons, i.e., . For any network architecture , every local minimum of the empirical loss function , satisfies if and only if the matrix is indefinite for all sequences satisfying .
Remark: (i) This sufficient and necessary condition implies that for any network architecture , there exists a set of parameters such that the network correctly classifies all samples in the dataset. That is, a set of parameters achieving zero training error exists regardless of the network architecture of . We provide the proof in Appendix B.15. (ii) We note that Proposition 12 holds only for quadratic neurons; finding sufficient and necessary conditions for other types of neurons remains an open problem.
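Since the condition in Proposition 12 hinges on a certain matrix being indefinite, the following sketch (the matrix and data here are hypothetical, not the paper's notation) shows how indefiniteness can be tested numerically via the eigenvalue spectrum of a symmetric matrix.

```python
import numpy as np

def is_indefinite(M, tol=1e-10):
    # A symmetric matrix is indefinite iff it has both a strictly
    # positive and a strictly negative eigenvalue.
    eig = np.linalg.eigvalsh(M)      # sorted ascending
    return eig[0] < -tol and eig[-1] > tol

# Hypothetical example: a matrix of the form sum_i c_i x_i x_i^T with
# mixed-sign coefficients can be indefinite.
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.outer(x1, x1) - np.outer(x2, x2)
assert is_indefinite(M)
assert not is_indefinite(np.eye(2))  # positive definite, hence not indefinite
```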
5 Conclusions
In this paper, we studied the loss surface of a smooth version of the hinge loss function in binary classification problems. We provided conditions under which the neural network has zero misclassification error at all local minima, and we also provided counterexamples showing that when some of these assumptions are relaxed, the result may not hold. Future work includes exploiting our results to design efficient training algorithms for classification tasks using neural networks.
References
 [1] Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 521(7553):436, 2015.
 [2] A. Choromanska, M. Henaff, M. Mathieu, G. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.
 [3] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pages 2933–2941, 2014.
 [4] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer, 2010.
 [5] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
 [6] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013.
 [7] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using DropConnect. In ICML, pages 1058–1066, 2013.
 [8] Y. Li, J. Yosinski, J. Clune, H. Lipson, and J. Hopcroft. Convergent learning: Do different neural networks learn the same representations? arXiv preprint arXiv:1511.07543, 2015.
 [9] A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang. Learning polynomials with neural networks. In ICML, 2014.
 [10] H. Sedghi and A. Anandkumar. Provable methods for training neural networks with sparse connectivity. arXiv preprint arXiv:1412.2693, 2014.
 [11] M. Janzamin, H. Sedghi, and A. Anandkumar. Beating the perils of nonconvexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
 [12] B. D. Haeffele and R. Vidal. Global optimality in tensor factorization, deep learning, and beyond. arXiv preprint arXiv:1506.07540, 2015.
 [13] A. Gautier, Q. N. Nguyen, and M. Hein. Globally optimal training of generalized polynomial neural networks with nonlinear spectral methods. In Advances in Neural Information Processing Systems, pages 1687–1695, 2016.
 [14] A. Brutzkus and A. Globerson. Globally optimal gradient descent for a convnet with Gaussian inputs. arXiv preprint arXiv:1702.07966, 2017.
 [15] M. Soltanolkotabi. Learning ReLUs via gradient descent. In NIPS, pages 2004–2014, 2017.
 [16] D. Soudry and E. Hoffer. Exponentially vanishing suboptimal local minima in multilayer neural networks. arXiv preprint arXiv:1702.05777, 2017.
 [17] S. Goel and A. Klivans. Learning depth-three neural networks in polynomial time. arXiv preprint arXiv:1709.06010, 2017.
 [18] S. S. Du, J. D. Lee, and Y. Tian. When is a convolutional filter easy to learn? arXiv preprint arXiv:1709.06129, 2017.
 [19] K. Zhong, Z. Song, P. Jain, P. L. Bartlett, and I. S. Dhillon. Recovery guarantees for one-hidden-layer neural networks. arXiv preprint arXiv:1706.03175, 2017.
 [20] Y. Li and Y. Yuan. Convergence analysis of two-layer neural networks with ReLU activation. In NIPS, pages 597–607, 2017.
 [21] P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53–58, 1989.
 [22] K. Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pages 586–594, 2016.
 [23] C. D. Freeman and J. Bruna. Topology and geometry of half-rectified network optimization. ICLR, 2016.
 [24] M. Hardt and T. Ma. Identity matters in deep learning. ICLR, 2017.
 [25] C. Yun, S. Sra, and A. Jadbabaie. Global optimality conditions for deep neural networks. arXiv preprint arXiv:1707.02444, 2017.
 [26] Q. Nguyen and M. Hein. The loss surface and expressivity of deep convolutional neural networks. arXiv preprint arXiv:1710.10928, 2017.
 [27] Q. Nguyen and M. Hein. The loss surface of deep and wide neural networks. arXiv preprint arXiv:1704.08045, 2017.
 [28] D. Boob and G. Lan. Theoretical properties of the global optimizer of two-layer neural network. arXiv preprint arXiv:1710.11241, 2017.
 [29] M. Soltanolkotabi, A. Javanmard, and J. D. Lee. Theoretical insights into the optimization landscape of overparameterized shallow neural networks. arXiv preprint arXiv:1707.04926, 2017.
 [30] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
 [31] C. Chennubhotla and A. Jepson. Sparse PCA: Extracting multi-scale structure from data. In ICCV, volume 1, pages 641–647. IEEE, 2001.
 [32] T. F. Cootes, G. J. Edwards, and C. J. Taylor. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):681–685, 2001.
 [33] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
 [34] R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
 [35] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In CVPR, 2017.
Appendix A Additional Results in Section 3
a.1 Proof of Lemma 1
Lemma 1 (Necessary condition).
Assume that neurons in the network are twice differentiable and the loss function has a continuous derivative on up to the third order. If and parameters denote a local minimum of the loss function , then for any ,
Proof.
We first recall some notations defined in the paper. The output of the neural network is
where is the single layer neural network parameterized by , i.e.,
and is a deep neural network parameterized by . The empirical loss function is given by
Since the loss function has a continuous derivative on up to the third order and the neurons in the network are twice differentiable, the gradient vector and the Hessian matrix exist. Furthermore, since is a local minimum of the loss function , we must have, for ,
(1) 
We now need to prove that if is a local minimum, then
We prove this by contradiction. Assume that there exists such that
Then by equation (1), we have . Now, consider the Hessian matrix . Since is a local minimum of the loss function , the matrix must be positive semidefinite at . By , we have
In addition, we have
Since the matrix is positive semidefinite, for any and ,
Since
and by setting
then
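The second-order necessary condition invoked in the proof above (the Hessian is positive semidefinite at any local minimum) can be checked numerically. The sketch below uses a central finite-difference Hessian on a toy smooth function, not the paper's loss.

```python
import numpy as np

# Toy smooth function (hypothetical), with its unique minimum at the origin.
def f(v):
    x, y = v
    return x**2 + 0.5 * y**2 + 0.1 * x * y

def hessian_fd(f, v, h=1e-5):
    # Central finite-difference approximation of the Hessian of f at v.
    n = len(v)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            vpp = v.copy(); vpp[i] += h; vpp[j] += h
            vpm = v.copy(); vpm[i] += h; vpm[j] -= h
            vmp = v.copy(); vmp[i] -= h; vmp[j] += h
            vmm = v.copy(); vmm[i] -= h; vmm[j] -= h
            H[i, j] = (f(vpp) - f(vpm) - f(vmp) + f(vmm)) / (4.0 * h * h)
    return H

# At a local minimum, the Hessian must be positive semidefinite.
v_star = np.zeros(2)
H = hessian_fd(f, v_star)
assert np.min(np.linalg.eigvalsh((H + H.T) / 2.0)) >= -1e-6
```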