
First Order Methods take Exponential Time to Converge to Global Minimizers of Non-Convex Functions

by Krishna Reddy Kesari, et al.

Machine learning algorithms typically perform optimization over classes of non-convex functions. In this work, we provide bounds on the fundamental hardness of identifying the global minimizer of a non-convex function. Specifically, we design a family of parametrized non-convex functions and employ statistical lower bounds for parameter estimation. We show that the parameter estimation problem is equivalent to the problem of function identification within the given family, and we argue that non-convex optimization is at least as hard as function identification. Together, these results prove that any first-order method can take exponential time to converge to a global minimizer.
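The intuition behind such hardness constructions can be illustrated with a toy example (this is a hypothetical sketch for illustration, not the family used in the paper): a convex quadratic bowl with a narrow Gaussian dip at an unknown parameter theta. The global minimizer lies near theta, but at any query point far from theta (in units of the dip width sigma) the gradient is exponentially close to that of the bare quadratic, so first-order queries reveal essentially no information about which member of the family is being optimized.

```python
import numpy as np

def make_f(theta, a=10.0, sigma=0.1):
    """Hypothetical member of a parametrized non-convex family:
    f_theta(x) = 0.5*||x||^2 - a * exp(-||x - theta||^2 / (2*sigma^2)).
    With a large enough dip depth a, the global minimizer sits near theta,
    yet away from theta the dip's gradient contribution is exponentially small."""
    def f(x):
        d = x - theta
        return 0.5 * np.dot(x, x) - a * np.exp(-np.dot(d, d) / (2 * sigma**2))

    def grad(x):
        d = x - theta
        # Gradient of the dip term: (a / sigma^2) * exp(-||d||^2 / (2*sigma^2)) * d,
        # which decays exponentially in ||d||^2 / sigma^2.
        dip = (a / sigma**2) * np.exp(-np.dot(d, d) / (2 * sigma**2))
        return x + dip * d

    return f, grad

theta = np.array([1.0, 1.0])
f, grad = make_f(theta)

# At the origin, ||x - theta||^2 / sigma^2 = 200, so the gradient is
# numerically indistinguishable from that of the plain quadratic 0.5*||x||^2:
x = np.zeros(2)
print(np.linalg.norm(grad(x) - x))  # exponentially small

# Yet the global minimum is in the dip, not at the origin:
print(f(theta), f(x))
```

A first-order method started away from theta sees gradients of the plain quadratic and is drawn to the origin; distinguishing between different values of theta from such queries requires landing inside the exponentially narrow dip, which is the mechanism the parameter-estimation lower bound formalizes.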



