A Relaxation Argument for Optimization in Neural Networks and Non-Convex Compressed Sensing

02/03/2020
by   G. Welper, et al.

It has been observed in practical applications and in theoretical analysis that over-parametrization helps to find good minima in neural network training. Similarly, in this article we study widening and deepening neural networks by a relaxation argument so that the enlarged networks are rich enough to run r copies of parts of the original network in parallel, without necessarily achieving zero training error as in over-parametrized scenarios. The partial copies can be combined in r^θ possible ways across θ layers. Therefore, the enlarged networks can potentially achieve the best training error of r^θ random initializations, but it is not immediately clear whether this can be realized by gradient descent or similar training methods. The same construction can be applied to other optimization problems by introducing a similar layered structure. We apply this idea to non-convex compressed sensing, where we show that in some scenarios we can realize the r^θ times increased chance of obtaining a global optimum by solving a convex optimization problem of dimension rθ.
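The counting argument in the abstract can be illustrated with a small sketch. The snippet below is not taken from the paper; it is a toy illustration, with all names (theta, r, widths, relu) chosen for exposition, of how a network whose θ layers each hold r independently initialized weight copies contains r^θ selectable sub-networks, and how replacing the hard per-layer selection with convex combination weights gives a relaxation of that discrete choice.

```python
# Toy sketch (assumed, not the paper's construction): each of the theta
# layers stores r random weight copies; picking one copy per layer yields
# r**theta distinct sub-networks, and relaxing the pick to convex weights
# per layer is the kind of relaxation the abstract refers to.
from itertools import product

import numpy as np

rng = np.random.default_rng(0)

theta, r = 3, 4            # number of layers, copies per layer (illustrative)
widths = [8, 16, 16, 1]    # layer sizes of a hypothetical base network


def relu(x):
    return np.maximum(x, 0.0)


# r independent random initializations for each of the theta layers
copies = [
    [rng.standard_normal((widths[l + 1], widths[l])) for _ in range(r)]
    for l in range(theta)
]


def forward(x, selection):
    """Run the sub-network obtained by choosing copies[l][selection[l]]
    in layer l; there are r**theta such selections."""
    h = x
    for l, k in enumerate(selection):
        h = relu(copies[l][k] @ h)
    return h


def forward_relaxed(x, alphas):
    """Relaxed version: in every layer, combine all r copies with convex
    weights alphas[l] (nonnegative, summing to one); the hard selections
    are the vertices of this feasible set."""
    h = x
    for l in range(theta):
        h = relu(sum(a * (W @ h) for a, W in zip(alphas[l], copies[l])))
    return h


x = rng.standard_normal(widths[0])

# Enumerate all r**theta sub-networks and report the one with the smallest
# toy "training error" (here just the magnitude of the output).
losses = {
    sel: float(np.abs(forward(x, sel)).sum())
    for sel in product(range(r), repeat=theta)
}
best = min(losses, key=losses.get)
print(f"{len(losses)} = r**theta sub-networks, best selection: {best}")
```

Under these assumptions, the enumeration touches 4^3 = 64 sub-networks, while the relaxed forward pass depends only on r·θ = 12 combination weights, mirroring the r^θ versus rθ trade-off described in the abstract.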
