A Stable High-order Tuner for General Convex Functions

11/19/2020
by José M. Moreu, et al.

Iterative gradient-based algorithms have been increasingly applied to the training of a broad variety of machine learning models, including large neural networks. In particular, momentum-based methods have received much attention due to their provable guarantees of accelerated learning for certain classes of problems, and multiple such algorithms have been derived. However, these properties hold only for constant regressors. When time-varying regressors occur, which is commonplace in dynamic systems, many of these momentum-based methods cannot guarantee stability. Recently, a new High-order Tuner (HT) was developed and shown to have 1) stability and asymptotic convergence for time-varying regressors and 2) non-asymptotic accelerated learning guarantees for constant regressors. These results were derived for a linear regression framework producing a quadratic loss function. In this paper, we extend and discuss the results of this same HT for general convex loss functions. By exploiting the definitions of convexity and smoothness, we establish similar stability and asymptotic convergence guarantees. Additionally, we conjecture that the HT has an accelerated convergence rate. Finally, we provide numerical simulations supporting the satisfactory behavior of the HT algorithm as well as the conjecture of accelerated learning.
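The abstract does not reproduce the HT update equations, so the snippet below is only a minimal sketch of the general idea: a normalized, two-sequence (Nesterov-style) momentum update applied to a smooth convex loss with a time-varying regressor. The logistic loss, the normalization N_k = 1 + mu * ||phi_k||^2, the gain values, and all variable names are illustrative assumptions, not the authors' exact algorithm.

    # Sketch of a normalized, momentum-based (high-order) tuner step for a
    # smooth convex loss. The loss choice, normalization, and gains below are
    # illustrative assumptions, not the paper's exact update equations.
    import numpy as np

    def logistic_loss_grad(theta, phi, y):
        """Gradient of l(theta) = log(1 + exp(-y * phi @ theta))."""
        margin = y * (phi @ theta)
        return (-y / (1.0 + np.exp(margin))) * phi

    def high_order_tuner_step(theta, nu, phi, y, gamma=0.1, beta=0.5, mu=1.0):
        """One two-sequence update with gradient normalization."""
        N_k = 1.0 + mu * (phi @ phi)        # normalization guards against large regressors
        grad = logistic_loss_grad(theta, phi, y)
        theta_bar = theta - gamma * beta * grad / N_k      # gradient step on theta
        theta_next = theta_bar - beta * (theta_bar - nu)   # pull toward momentum variable
        grad_bar = logistic_loss_grad(theta_next, phi, y)
        nu_next = nu - gamma * grad_bar / N_k              # momentum-variable update
        return theta_next, nu_next

    # Usage on a stream of time-varying regressors phi_k (synthetic data):
    rng = np.random.default_rng(0)
    theta_true = np.array([1.0, -2.0, 0.5])
    theta, nu = np.zeros(3), np.zeros(3)
    for k in range(200):
        phi = rng.normal(size=3) * (1.0 + 0.5 * np.sin(0.1 * k))  # time-varying regressor
        y = np.sign(phi @ theta_true)
        theta, nu = high_order_tuner_step(theta, nu, phi, y)

The two-sequence structure and the normalization by N_k are the features the paper's stability argument relies on; the precise placement of gradient evaluations and gains should be taken from the full text.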

Related research

05/04/2020 · Accelerated Learning with Robustness to Adversarial Regressors
High order iterative momentum-based parameter update algorithms have see...

03/23/2021 · A High-order Tuner for Accelerated Learning and Control
Gradient-descent based iterative algorithms pervade a variety of problem...

05/08/2023 · Accelerated Algorithms for a Class of Optimization Problems with Equality and Box Constraints
Convex optimization with equality and inequality constraints is a ubiqui...

11/24/2020 · Shuffling Gradient-Based Methods with Momentum
We combine two advanced ideas widely used in optimization for machine le...

03/12/2019 · Accelerated Learning in the Presence of Time Varying Features with Applications to Machine Learning and Adaptive Control
Features in machine learning problems are often time varying and may be ...

02/03/2021 · The Instability of Accelerated Gradient Descent
We study the algorithmic stability of Nesterov's accelerated gradient me...

11/21/2022 · Some Numerical Simulations Based on Dacorogna Example Functions in Favor of Morrey Conjecture
Morrey Conjecture deals with two properties of functions which are known...
