A High-order Tuner for Accelerated Learning and Control

03/23/2021
by Spencer McDonald, et al.

Gradient-descent-based iterative algorithms pervade a variety of problems in estimation, prediction, learning, control, and optimization. Recently, iterative algorithms based on higher-order information have been explored in an attempt to achieve accelerated learning. In this paper, we study a specific high-order tuner (HT) that has been shown to guarantee stability with time-varying regressors in linearly parametrized systems and accelerated convergence with constant regressors. We show that this tuner continues to provide bounded parameter estimates even when the gradients are corrupted by noise, and that the parameter estimates converge exponentially to a compact set whose size depends on the noise statistics. As HT algorithms can be applied to a wide range of problems in estimation, filtering, control, and machine learning, the result obtained in this paper represents an important extension to the topic of real-time and fast decision making.
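The paper's exact HT update is not reproduced in this abstract. As an illustrative sketch only, the snippet below implements a normalized, momentum-based ("higher-order") parameter update on a linearly parametrized regression problem with noise-corrupted gradients, and checks that the estimate remains bounded near the true parameters. The step size `gamma`, momentum `mu`, and normalizing signal `N` are illustrative choices, not the constants or the precise algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly parametrized system: y_k = phi_k^T theta_star (theta_star unknown).
theta_star = np.array([1.0, -2.0, 0.5])

def noisy_grad(theta, phi, y, noise_std=0.05):
    """Gradient of the instantaneous squared error, corrupted by additive noise."""
    return (phi @ theta - y) * phi + noise_std * rng.standard_normal(theta.shape)

# Momentum-based ("higher-order") update with gradient normalization.
# gamma, mu, and the normalizer N_k below are illustrative, not the paper's.
gamma, mu = 0.1, 0.5
theta = np.zeros(3)   # parameter estimate
v = np.zeros(3)       # auxiliary momentum state

for k in range(2000):
    phi = rng.standard_normal(3)   # time-varying regressor
    y = phi @ theta_star           # noise-free measurement
    N = 1.0 + phi @ phi            # normalizing signal keeps step sizes bounded
    v = mu * v - gamma * noisy_grad(theta, phi, y) / N
    theta = theta + v

# Despite the noisy gradients, the estimate stays bounded near theta_star.
final_err = np.linalg.norm(theta - theta_star)
```

The gradient normalization by `N = 1 + phi @ phi` mirrors the role of the normalizing signal used in HT-style algorithms: it keeps the effective step size bounded even when the regressor grows, which is what allows stability arguments with time-varying regressors.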


Related research

A Stable High-order Tuner for General Convex Functions (11/19/2020)
Iterative gradient-based algorithms have been increasingly applied for t...

Accelerated Learning in the Presence of Time Varying Features with Applications to Machine Learning and Adaptive Control (03/12/2019)
Features in machine learning problems are often time varying and may be ...

Accelerated Algorithms for a Class of Optimization Problems with Equality and Box Constraints (05/08/2023)
Convex optimization with equality and inequality constraints is a ubiqui...

Accelerated Learning with Robustness to Adversarial Regressors (05/04/2020)
High order iterative momentum-based parameter update algorithms have see...

Higher-order algorithms for nonlinearly parameterized adaptive control (12/31/2019)
A set of new adaptive control algorithms is presented. The algorithms ar...

On Acceleration with Noise-Corrupted Gradients (05/31/2018)
Accelerated algorithms have broad applications in large-scale optimizati...

Accelerated Target Updates for Q-learning (05/07/2019)
This paper studies accelerations in Q-learning algorithms. We propose an...
