Characterizing Parametric and Convergence Stability in Nonconvex and Nonsmooth Optimizations: A Geometric Approach

04/04/2022
by Xiaotie Deng, et al.

We consider stability issues in minimizing a continuous (possibly parameterized, nonconvex and nonsmooth) real-valued function f. We call a point stationary if all of its possible directional derivatives are nonnegative. In this work, we focus on two notions of stability at stationary points of f: parametric stability and convergence stability. Parametric considerations are widely studied in various fields, including smoothed analysis, numerical stability, condition numbers, and sensitivity analysis for linear programming. Parametric stability asks whether minor perturbations of the parameters lead to dramatic changes in the position and f-value of a stationary point. Convergence stability, meanwhile, indicates a non-escapable solution: any point sequence iteratively produced by an optimization algorithm cannot escape from a neighborhood of such a stationary point and instead approaches it, so that the stationary point is robust to the precision parameter and to algorithmic numerical errors. It turns out that these notions have deep connections to geometry. We show that parametric stability is linked to deformations of the graph of the function, while convergence stability is concerned with an area partitioning of the function's domain. Utilizing these connections, we prove quite tight conditions for these two stability notions over a wide range of functions and optimization algorithms with sufficiently small step sizes and precision parameters. These conditions are subtle: a slightly weaker requirement on the function runs counter to primitive intuition and leads to wrong conclusions. We present three applications of this theory, which shed light on Nash equilibrium computation, nonconvex and nonsmooth optimization, and the new optimization methodology of deep neural networks.
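As a rough illustration of the stationarity notion above (a minimal sketch, not from the paper: the test function f(x, y) = |x| - cos(y), the sampling density, and the tolerance are all illustrative choices), one can probe a candidate point numerically by sampling one-sided directional derivatives and checking that none is significantly negative:

```python
import numpy as np

# A minimal sketch, not from the paper: numerically probing the notion of
# stationarity used in the abstract. A point x is stationary if every
# one-sided directional derivative f'(x; d) is nonnegative. The test
# function, sampling density, and tolerance below are illustrative choices.

def f(x):
    # Nonconvex and nonsmooth example: f(x, y) = |x| - cos(y).
    return abs(x[0]) - np.cos(x[1])

def directional_derivative(f, x, d, h=1e-7):
    # One-sided finite-difference estimate of f'(x; d) for a unit direction d.
    return (f(x + h * d) - f(x)) / h

def looks_stationary(f, x, n_dirs=360, tol=-1e-4):
    # Sample unit directions on the circle; declare x (approximately)
    # stationary if no sampled directional derivative is clearly negative.
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return min(directional_derivative(f, x, d) for d in dirs) > tol

print(looks_stationary(f, np.array([0.0, 0.0])))  # True: the kink of |x| at the minimizer
print(looks_stationary(f, np.array([0.5, 0.0])))  # False: d = (-1, 0) strictly decreases f
```

Such a finite sample can only certify stationarity up to the chosen tolerance, which mirrors the abstract's point that stability guarantees must be stated relative to a precision parameter and numerical errors.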


