
First Order Methods take Exponential Time to Converge to Global Minimizers of Non-Convex Functions

02/28/2020
by Krishna Reddy Kesari, et al.

Machine learning algorithms typically perform optimization over a class of non-convex functions. In this work, we provide bounds on the fundamental hardness of identifying the global minimizer of a non-convex function. Specifically, we design a family of parametrized non-convex functions and employ statistical lower bounds for parameter estimation. We show that the parameter estimation problem is equivalent to the problem of function identification within the given family, and we then argue that non-convex optimization is at least as hard as function identification. Together, these results prove that any first-order method can take exponential time to converge to a global minimizer.
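
To illustrate the phenomenon concretely, the following is a minimal sketch, not the paper's construction: a toy one-dimensional family f_theta(x) = x^2 - a*exp(-(x - theta)^2 / (2*s^2)), in which a hidden parameter theta places a narrow Gaussian dip containing the global minimizer. The family, the parameter values, and the step size below are illustrative assumptions; the point is that a first-order method receives essentially no gradient signal about theta unless it happens to start inside the narrow basin.

```python
# Illustrative sketch only, not the construction from the paper: a toy
# parametrized family f_theta(x) = x^2 - a * exp(-(x - theta)^2 / (2 s^2)).
# The narrow Gaussian "dip" at the hidden parameter theta holds the global
# minimizer; away from it the landscape looks like the convex x^2 bowl.
import numpy as np

def grad_f(x, theta, a=3.0, s=0.1):
    """Gradient of f_theta(x) = x^2 - a * exp(-(x - theta)^2 / (2 s^2))."""
    return 2.0 * x + a * (x - theta) / s**2 * np.exp(-(x - theta)**2 / (2.0 * s**2))

def gradient_descent(x0, theta, lr=1e-3, steps=20_000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x, theta)
    return x

theta = 1.0  # hidden parameter; the global minimizer sits just below x = theta
# Started outside the dip, gradient descent follows the x^2 bowl down to the
# spurious minimizer near 0.
print(gradient_descent(x0=0.5, theta=theta))   # ~0
# Only an initialization inside the narrow basin reaches the global minimizer.
print(gradient_descent(x0=0.9, theta=theta))   # ~0.99
```

Because the basin can be made arbitrarily narrow, distinguishing which member of the family is being optimized from local gradient information alone can require exponentially many queries; this is the intuition behind the abstract's claim that non-convex optimization is at least as hard as function identification.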

Related Research

Global Non-convex Optimization with Discretized Diffusions (10/29/2018)
An Euler discretization of the Langevin diffusion is known to converge t...

Relaxed Sparse Eigenvalue Conditions for Sparse Estimation via Non-convex Regularized Regression (06/14/2013)
Non-convex regularizers usually improve the performance of sparse estima...

Convexification of Neural Graph (01/09/2018)
Traditionally, most complex intelligence architectures are extremely non...

Sinusoidal Frequency Estimation by Gradient Descent (10/26/2022)
Sinusoidal parameter estimation is a fundamental task in applications fr...

Fast Global Convergence via Landscape of Empirical Loss (02/13/2018)
While optimizing convex objective (loss) functions has been a powerhouse...

Modeling Discrete Interventional Data using Directed Cyclic Graphical Models (05/09/2012)
We outline a representation for discrete multivariate distributions in t...

Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter (02/02/2017)
Given a nonconvex function f(x) that is an average of n smooth functions...