Nonstationary Nonparametric Online Learning: Balancing Dynamic Regret and Model Parsimony

09/12/2019
by Amrit Singh Bedi, et al.

An open challenge in supervised learning is concept drift: a data point is initially classified according to one label, but over time the notion of that label changes. Beyond linear autoregressive models, transfer and meta-learning address drift, but require data that is representative of disparate domains at the outset of training. To relax this requirement, we propose a memory-efficient online universal function approximator based on compressed kernel methods. Our approach hinges upon viewing non-stationary learning as online convex optimization with dynamic comparators, for which performance is quantified by dynamic regret. Prior works control dynamic regret growth only for linear models. In contrast, we hypothesize that actions belong to a reproducing kernel Hilbert space (RKHS). We propose a functional variant of online gradient descent (OGD) operating in tandem with greedy subspace projections. Projections are necessary because, without compression, the complexity of an RKHS function grows in proportion to the number of observations. For this scheme, we establish sublinear dynamic regret growth in terms of both the loss variation and the functional path length, while the memory of the function sequence remains moderate. Experiments demonstrate the usefulness of the proposed technique for online nonlinear regression and classification problems with non-stationary data.
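To make the recipe concrete, below is a minimal sketch, not the authors' implementation, of functional OGD with a greedy subspace projection for square-loss regression under a Gaussian kernel. The class name OnlineKernelFOGD, the hyperparameters eta (step size), eps (compression budget), and gamma (kernel bandwidth), and the brute-force pruning rule are illustrative assumptions: the projection here greedily removes dictionary points whose removal changes the function by at most eps in RKHS norm, which is one plausible instantiation of a greedy subspace projection rather than the paper's exact compression routine.

```python
import numpy as np

class OnlineKernelFOGD:
    """Sketch of functional online gradient descent (OGD) in an RKHS with a
    greedy subspace projection. Hyperparameter values are illustrative."""

    def __init__(self, eta=0.5, eps=1e-3, gamma=1.0):
        self.eta, self.eps, self.gamma = eta, eps, gamma
        self.dict = np.empty((0, 0))   # kernel dictionary (retained points)
        self.w = np.empty(0)           # weights on dictionary elements

    def _k(self, X, Y):
        # Gaussian kernel matrix between rows of X and rows of Y.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def predict(self, x):
        if self.w.size == 0:
            return 0.0
        return float(self._k(x[None, :], self.dict) @ self.w)

    def step(self, x, y):
        # Functional OGD step for the square loss: the RKHS gradient of
        # 0.5*(f(x)-y)^2 is (f(x)-y)*k(x, .), so the update appends x to
        # the dictionary with weight -eta*(f(x)-y), then compresses.
        err = self.predict(x) - y
        if self.w.size == 0:
            self.dict, self.w = x[None, :], np.array([-self.eta * err])
        else:
            self.dict = np.vstack([self.dict, x])
            self.w = np.append(self.w, -self.eta * err)
        self._compress()

    def _compress(self):
        # Greedy subspace projection (assumed variant): repeatedly drop the
        # dictionary element whose removal, after a least-squares refit of
        # the remaining weights, perturbs f by at most eps in RKHS norm.
        while self.dict.shape[0] > 1:
            K = self._k(self.dict, self.dict)
            best, best_err, best_w = None, np.inf, None
            for j in range(self.dict.shape[0]):
                keep = np.delete(np.arange(self.dict.shape[0]), j)
                Kkk = K[np.ix_(keep, keep)]
                # Project f onto the span of the kept kernel elements.
                w_new = np.linalg.lstsq(Kkk, K[keep] @ self.w, rcond=None)[0]
                # Squared RKHS distance between f and its projection.
                diff = (self.w @ K @ self.w
                        - 2 * w_new @ (K[keep] @ self.w)
                        + w_new @ Kkk @ w_new)
                if diff < best_err:
                    best, best_err, best_w = keep, diff, w_new
            if best_err <= self.eps ** 2:
                self.dict, self.w = self.dict[best], best_w
            else:
                break
```

On a stream, one would call step(x_t, y_t) once per sample and predict(x) between updates; a larger eps yields a smaller dictionary (more parsimony) at the cost of a looser approximation, which is the regret-versus-memory trade-off the title refers to. Classification would swap the square loss for, e.g., a logistic loss with the corresponding functional gradient.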
