Accelerated Learning with Robustness to Adversarial Regressors

05/04/2020
by   Joseph E. Gaudio, et al.

High-order, iterative, momentum-based parameter update algorithms have seen widespread application in training machine learning models. Recently, connections with variational approaches and continuous-time dynamics have led to the derivation of new classes of high-order learning algorithms with accelerated learning guarantees. Such methods, however, have only considered the case of static regressors. There is a significant need in continual/lifelong learning applications for parameter update algorithms that can be proven stable in the presence of adversarial, time-varying regressors. In such settings, the learning algorithm must continually adapt to changes in the distribution of regressors. In this paper, we propose a new discrete-time algorithm which: 1) provides stability and asymptotic convergence guarantees in the presence of adversarial regressors, by leveraging insights from adaptive control theory, and 2) provides non-asymptotic accelerated learning guarantees, by leveraging insights from convex optimization. In particular, our algorithm reaches an ϵ-sub-optimal point in at most Õ(1/√(ϵ)) iterations when regressors are constant, matching Nesterov's lower bound of Ω(1/√(ϵ)) up to a log(1/ϵ) factor, and provides guaranteed stability bounds when regressors are time-varying.
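The paper's own update law is not reproduced in this abstract. As a point of reference only, the sketch below shows standard Nesterov accelerated gradient descent on a fixed-regressor least-squares problem, the setting in which the Õ(1/√(ϵ)) accelerated rate applies. The function name `nesterov_ls` and the toy data are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch only: standard Nesterov accelerated gradient on a
# static least-squares problem. This is NOT the algorithm proposed in the
# paper; it merely illustrates the accelerated regime that the paper
# matches when regressors are constant.
import numpy as np

def nesterov_ls(A, y, num_iters=500):
    """Minimize 0.5*||A theta - y||^2 with Nesterov's accelerated gradient."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2           # smoothness constant: largest eigenvalue of A^T A
    theta = np.zeros(n)                      # current parameter iterate
    theta_prev = np.zeros(n)                 # previous iterate, used for momentum
    for k in range(1, num_iters + 1):
        momentum = (k - 1) / (k + 2)         # standard Nesterov momentum schedule
        v = theta + momentum * (theta - theta_prev)  # look-ahead point
        grad = A.T @ (A @ v - y)             # gradient at the look-ahead point
        theta_prev = theta
        theta = v - grad / L                 # gradient step with step size 1/L
    return theta

# Toy usage with constant regressors:
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
theta_star = rng.standard_normal(10)
y = A @ theta_star
theta_hat = nesterov_ls(A, y)
print(np.linalg.norm(theta_hat - theta_star))  # error should be small
```

The paper's contribution is to retain this accelerated rate for constant regressors while also guaranteeing stability when the regressor matrix varies adversarially over time, a guarantee that plain momentum methods like the one above do not provide.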
