Accelerated Learning with Robustness to Adversarial Regressors

05/04/2020
by Joseph E. Gaudio, et al.

High-order iterative momentum-based parameter update algorithms have seen widespread applications in training machine learning models. Recently, connections with variational approaches and continuous dynamics have led to the derivation of new classes of high-order learning algorithms with accelerated learning guarantees. Such methods, however, have only considered the case of static regressors. There is a significant need in continual/lifelong learning applications for parameter update algorithms that can be proven stable in the presence of adversarial time-varying regressors. In such settings, the learning algorithm must continually adapt to changes in the distribution of regressors. In this paper, we propose a new discrete-time algorithm that 1) provides stability and asymptotic convergence guarantees in the presence of adversarial regressors by leveraging insights from adaptive control theory, and 2) provides non-asymptotic accelerated learning guarantees by leveraging insights from convex optimization. In particular, when the regressors are constant, our algorithm reaches an ϵ-sub-optimal point in at most Õ(1/√ϵ) iterations, matching Nesterov's lower bound of Ω(1/√ϵ) up to a log(1/ϵ) factor; when the regressors are time-varying, it provides guaranteed stability bounds.
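To make the update structure concrete, below is a minimal sketch of a normalized, momentum-style high-order tuner for streaming linear regression. It is not the paper's exact algorithm: the two-sequence update, the normalizing signal N_t = 1 + ||φ_t||², and the step sizes γ and β are assumptions modeled on the general high-order-tuner literature cited among the related works; the normalization is the adaptive-control device for keeping updates bounded when the regressors φ_t vary adversarially.

```python
import numpy as np

# Hedged sketch, NOT the paper's exact algorithm: a normalized,
# momentum-style (high-order) tuner for streaming linear regression
# y_t = phi_t^T theta*. The two-sequence update, the normalizing signal
# N_t = 1 + ||phi_t||^2, and the step sizes gamma, beta are assumptions
# drawn from the general high-order-tuner literature.

def high_order_tuner(phi_stream, y_stream, dim, gamma=0.1, beta=0.5):
    theta = np.zeros(dim)  # fast iterate used for prediction
    nu = np.zeros(dim)     # slow iterate accumulating normalized gradients
    for phi, y in zip(phi_stream, y_stream):
        N = 1.0 + phi @ phi                  # normalizing signal N_t
        grad = (phi @ theta - y) * phi       # gradient of 0.5*(phi^T theta - y)^2
        nu = nu - gamma * grad / N           # slow, normalized gradient step
        theta = theta - beta * (theta - nu)  # pull theta toward nu (momentum-like)
    return theta

# Usage on a hypothetical stream with occasional large (adversarial) regressors:
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])
phis = [rng.normal(size=3) * (1 + 10 * (t % 7 == 0)) for t in range(5000)]
ys = [phi @ theta_star for phi in phis]
print(high_order_tuner(phis, ys, dim=3))
```

Dividing each gradient by N_t bounds the size of every update no matter how large φ_t becomes, which is the stability mechanism the abstract attributes to adaptive control; the two-sequence (θ, ν) structure is what supplies Nesterov-style acceleration when the regressors are constant.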


Related research

11/19/2020 · A Stable High-order Tuner for General Convex Functions
Iterative gradient-based algorithms have been increasingly applied for t...

03/12/2019 · Accelerated Learning in the Presence of Time Varying Features with Applications to Machine Learning and Adaptive Control
Features in machine learning problems are often time varying and may be ...

05/07/2019 · Accelerated Target Updates for Q-learning
This paper studies accelerations in Q-learning algorithms. We propose an...

05/08/2023 · Accelerated Algorithms for a Class of Optimization Problems with Equality and Box Constraints
Convex optimization with equality and inequality constraints is a ubiqui...

03/23/2021 · A High-order Tuner for Accelerated Learning and Control
Gradient-descent based iterative algorithms pervade a variety of problem...

04/11/2019 · Connections Between Adaptive Control and Optimization in Machine Learning
This paper demonstrates many immediate connections between adaptive cont...

07/25/2022 · Finite-Time Analysis of Asynchronous Q-learning under Diminishing Step-Size from Control-Theoretic View
Q-learning has long been one of the most popular reinforcement learning ...
