Understanding Global Loss Landscape of One-hidden-layer ReLU Neural Networks

02/12/2020
by Bo Liu, et al.

For one-hidden-layer ReLU networks, we show that all local minima are global within each differentiable region, and that these local minima can be unique or form a continuum, depending on the data, the activation patterns of the hidden neurons, and the network size. We give criteria for identifying whether a local minimum lies inside its defining region; if it does, we call it a genuine differentiable local minimum and give its location and loss value. Furthermore, we give necessary and sufficient conditions for the existence of saddle points and of non-differentiable local minima. Finally, we compute the probability of getting stuck in a genuine local minimum for Gaussian input data and parallel weight vectors, and show that it vanishes exponentially when the weights lie in regions where data are not too scarce. This may help explain why gradient-based local search methods usually do not get trapped in local minima when training deep ReLU neural networks.
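
As a rough sketch of the setup the abstract refers to (the notation below is introduced here for illustration and assumes the standard squared-loss formulation; the paper itself may differ in details), a one-hidden-layer ReLU network with hidden weights w_1, ..., w_K and output weights v_1, ..., v_K computes

\[
f(x) = \sum_{k=1}^{K} v_k \,\max\bigl(0,\, w_k^{\top} x\bigr).
\]

Fixing, for each training pair (x_i, y_i), the activation indicators a_{ik} = \mathbf{1}[\, w_k^{\top} x_i > 0 \,] selects one differentiable region (cell) of weight space; inside that cell the loss

\[
L(w, v) = \frac{1}{2} \sum_{i} \Bigl( \sum_{k} v_k\, a_{ik}\, w_k^{\top} x_i - y_i \Bigr)^{2}
\]

is smooth, and the statements about local minima, saddle points, and non-differentiable (boundary) minima can be read region by region.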
