Deterministic and stochastic inexact regularization algorithms for nonconvex optimization with optimal complexity

11/09/2018
by S. Bellavia, et al.

A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is β-Hölder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when computing objective function values and derivatives. The complexity analysis covers arbitrary optimality order and arbitrary degree of available approximate derivatives. It extends results of Cartis, Gould and Toint (2018) on evaluation complexity to the inexact case: if a qth-order minimizer is sought using approximations to the first p derivatives, it is proved that a suitable approximate minimizer within ϵ is computed by the proposed algorithm in at most O(ϵ^{-(p+β)/(p-q+β)}) iterations and at most O(|log(ϵ)| ϵ^{-(p+β)/(p-q+β)}) approximate evaluations. While the proposed framework remains so far conceptual for high degrees and orders, it is shown to yield simple and computationally realistic inexact methods when specialized to the unconstrained and bound-constrained first- and second-order cases. The deterministic complexity results are finally extended to the stochastic context, yielding adaptive sample-size rules for subsampling methods typical of machine learning.
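The adaptive sample-size rules mentioned above can be illustrated with a minimal sketch. All names and constants below are our own hypothetical choices, not the paper's: to estimate a full gradient (1/N)·Σᵢ gᵢ(x) within absolute accuracy ε, a standard concentration-type heuristic suggests a subsample of size roughly σ²/ε² (σ² bounding the per-sample gradient variance), capped at the full data size N, so that tighter accuracy demands automatically trigger larger subsamples.

```python
import numpy as np

def adaptive_sample_size(sigma, eps, N):
    """Heuristic subsample size ~ sigma^2 / eps^2 (capped at N) so that the
    subsampled mean gradient is accurate to about eps."""
    return min(N, max(1, int(np.ceil(sigma**2 / eps**2))))

def subsampled_gradient(per_sample_grads, eps, sigma, rng):
    """Estimate the mean gradient from an adaptively sized random subsample.

    per_sample_grads: (N, d) array of per-sample gradients g_i(x).
    Returns the subsample average and the subsample size used.
    """
    N = per_sample_grads.shape[0]
    n = adaptive_sample_size(sigma, eps, N)
    idx = rng.choice(N, size=n, replace=False)
    return per_sample_grads[idx].mean(axis=0), n
```

In an inexact regularization method of the kind described, ε would shrink as the iterates approach a critical point, so early iterations work with small cheap subsamples and only the final iterations require near-full passes over the data.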

