On the Second-order Convergence Properties of Random Search Methods

10/25/2021
by Aurelien Lucchi, et al.

We study the theoretical convergence properties of random-search methods when optimizing non-convex objective functions without access to derivatives. We prove that standard random-search methods that do not rely on second-order information converge to a second-order stationary point. However, they suffer from exponential complexity in terms of the input dimension of the problem. To address this issue, we propose a novel variant of random search that exploits negative curvature while relying only on function evaluations. We prove that this approach converges to a second-order stationary point at a much faster rate than vanilla methods: the complexity in terms of the number of function evaluations is only linear in the problem dimension. We test our algorithm empirically and find good agreement with our theoretical results.
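To make the idea in the abstract concrete, here is a minimal, hypothetical Python sketch of a derivative-free step that probes a random direction and uses a finite-difference estimate of directional curvature to exploit negative curvature, all from function evaluations alone. The helper name `zeroth_order_step`, the step sizes, the curvature threshold, and the toy objective are illustrative assumptions and not the paper's actual algorithm or constants.

```python
import numpy as np

def zeroth_order_step(f, x, sigma=1e-3, step=1e-1, rng=None):
    """One illustrative derivative-free step.

    Probes a random unit direction u with three function evaluations. The
    central difference (f(x+su) - 2 f(x) + f(x-su)) / s^2 estimates the
    directional curvature u^T H(x) u; if it is clearly negative, take a longer
    step along u (negative-curvature exploitation). Otherwise, keep whichever
    probe point improved on f(x), as in plain random search.
    This is a sketch of the idea, not the algorithm analyzed in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                       # unit-norm probe direction
    f0, fp, fm = f(x), f(x + sigma * u), f(x - sigma * u)
    curvature = (fp - 2.0 * f0 + fm) / sigma ** 2  # ~ u^T Hessian(x) u
    if curvature < -1e-3:
        # Negative curvature detected: move along the descent side of u.
        d = u if fp < fm else -u
        return x + step * d
    # Fall back to a vanilla random-search move over the probed points.
    candidates = [x, x + sigma * u, x - sigma * u]
    return candidates[int(np.argmin([f0, fp, fm]))]

# Toy usage: a non-convex objective with a saddle point at the origin.
f = lambda z: z[0] ** 2 - z[1] ** 2 + 0.25 * z[1] ** 4
x = np.zeros(2)                                  # start at the saddle
for _ in range(500):
    x = zeroth_order_step(f, x)
```

In this toy run the curvature probe lets the iterate leave the saddle at the origin, whereas a pure improvement-based step can stall there; the dimension-dependence of such probing is exactly what the paper's complexity analysis quantifies.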


Related research

Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations (06/24/2020)
We design an algorithm which finds an ϵ-approximate stationary point (wi...

Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients (10/04/2022)
We consider escaping saddle points of nonconvex problems where only the ...

First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time (11/03/2017)
Two classes of methods have been proposed for escaping from saddle point...

SONIA: A Symmetric Blockwise Truncated Optimization Algorithm (06/06/2020)
This work presents a new algorithm for empirical risk minimization. The ...

Quadratic and Cubic Regularisation Methods with Inexact Function and Random Derivatives for Finite-Sum Minimisation (03/30/2021)
This paper focuses on regularisation methods using models up to the thir...

SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems (07/09/2019)
This paper proposes low-complexity algorithms for finding approximate se...

Efficient approaches for escaping higher order saddle points in non-convex optimization (02/18/2016)
Local search heuristics for non-convex optimizations are popular in appl...
