Uniform Graphical Convergence of Subgradients in Nonconvex Optimization and Learning

10/17/2018
by Damek Davis, et al.

We investigate the stochastic optimization problem of minimizing population risk, where the loss defining the risk is assumed to be weakly convex. Compositions of Lipschitz convex functions with smooth maps are the primary examples of such losses. We analyze how well the subgradients of such nonsmooth and nonconvex problems are estimated by their sample average approximations. Our main results establish dimension-dependent rates on subgradient estimation in full generality, and dimension-independent rates when the loss is a generalized linear model. As an application of the developed techniques, we analyze the nonsmooth landscape of a robust nonlinear regression problem.
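To make the central objects concrete, here is a minimal sketch, not taken from the paper: it assumes a robust nonlinear regression loss of the form f(x; a, b) = |phi(<a, x>) - b| with a smooth link phi, and computes one subgradient of the sample average approximation (SAA) via the chain rule through the convex outer function |.|. The names phi and saa_subgradient, and the quadratic link, are hypothetical choices made only for this illustration.

```python
import numpy as np

# Hypothetical robust nonlinear regression loss
#   f(x; a, b) = |phi(<a, x>) - b|,
# with a smooth link phi (here phi(t) = t**2). The outer function
# |.| is Lipschitz and convex and the inner map is smooth, so the
# composition is weakly convex -- the class of losses the paper studies.

def phi(t):
    return t ** 2

def phi_prime(t):
    return 2 * t

def saa_subgradient(x, A, b):
    """One subgradient of the SAA objective
    (1/n) * sum_i |phi(<a_i, x>) - b_i| at the point x."""
    t = A @ x                      # inner products <a_i, x>
    r = phi(t) - b                 # residuals phi(<a_i, x>) - b_i
    # sign(r) selects a subgradient of |.|; chain rule through phi
    return (np.sign(r) * phi_prime(t)) @ A / len(b)

# Usage: form the SAA subgradient from n samples at a test point.
rng = np.random.default_rng(0)
n, d = 1000, 10
x_star = rng.standard_normal(d)
A = rng.standard_normal((n, d))
b = phi(A @ x_star)                # noiseless measurements
x = rng.standard_normal(d)
print(np.linalg.norm(saa_subgradient(x, A, b)))
```

The sign of the residual picks out a subgradient of the absolute value; the paper's results concern how uniformly such SAA subgradients converge to their population counterparts as the sample size grows.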
