Hölder Gradient Descent and Adaptive Regularization Methods in Banach Spaces for First-Order Points

04/06/2021
by   Serge Gratton, et al.

This paper considers optimization of smooth nonconvex functionals in smooth infinite-dimensional spaces. A Hölder gradient descent algorithm is first proposed for finding approximate first-order points of regularized polynomial functionals. This method is then applied to analyze the evaluation complexity of an adaptive regularization method which searches for approximate first-order points of functionals whose pth derivative is β-Hölder continuous. It is shown that finding an ϵ-approximate first-order point requires at most O(ϵ^{-(p+β)/(p+β-1)}) evaluations of the functional and its first p derivatives.
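The bound stems from the stepsize that a β-Hölder continuous gradient permits: if ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖^β, the step along −∇f(x) that minimizes the resulting descent bound has length (‖∇f(x)‖^{1−β}/L)^{1/β} and decreases f by at least (β/(1+β)) L^{−1/β} ‖∇f(x)‖^{(1+β)/β}, so an ϵ-approximate first-order point is reached in O(ϵ^{−(1+β)/β}) steps, which is the paper's bound with p = 1. What follows is a minimal finite-dimensional Python sketch of this p = 1 stepsize rule only; the paper itself works in Banach spaces with models of order p and adapts the regularization parameter rather than assuming L is known. The oracle names f and grad_f and the test problem are illustrative assumptions, not the paper's notation.

import numpy as np

def holder_gradient_descent(grad_f, x0, L=1.0, beta=0.5,
                            eps=1e-6, max_iter=10_000):
    """Gradient descent for an objective whose gradient is beta-Hoelder
    continuous, i.e. ||grad f(x) - grad f(y)|| <= L ||x - y||**beta.

    The stepsize t = (||g||**(1 - beta) / L)**(1 / beta) minimizes the
    Hoelder descent bound
        f(x - t g) <= f(x) - t ||g||**2
                      + L t**(1 + beta) ||g||**(1 + beta) / (1 + beta),
    guaranteeing a decrease of at least
        (beta / (1 + beta)) * L**(-1 / beta) * ||g||**((1 + beta) / beta)
    per iteration. Stops at an eps-approximate first-order point.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:  # eps-approximate first-order point reached
            break
        t = (gnorm ** (1.0 - beta) / L) ** (1.0 / beta)
        x = x - t * g
    return x

# Usage on a smooth test problem whose gradient is Lipschitz (beta = 1,
# L = 4 on the region visited by the iterates):
if __name__ == "__main__":
    grad_f = lambda x: x ** 3 + x        # gradient of 0.25*sum(x^4) + 0.5*sum(x^2)
    x_star = holder_gradient_descent(grad_f, x0=np.ones(3), L=4.0, beta=1.0)
    print(x_star, np.linalg.norm(grad_f(x_star)))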

research
08/14/2017

Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The unconstrained minimization of a sufficiently smooth objective functi...
research
11/09/2018

Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization

A regularization algorithm using inexact function values and inexact der...
research
08/12/2020

Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion

The paper considers distributed gradient flow (DGF) for multi-agent nonc...
research
11/09/2018

Deterministic and stochastic inexact regularization algorithms for nonconvex optimization with optimal complexity

A regularization algorithm using inexact function values and inexact der...
research
07/27/2020

Stochastic Gradient Descent applied to Least Squares regularizes in Sobolev spaces

We study the behavior of stochastic gradient descent applied to ‖Ax − b‖_2...
research
06/17/2021

Escaping strict saddle points of the Moreau envelope in nonsmooth optimization

Recent work has shown that stochastically perturbed gradient methods can...
