Online Robust and Adaptive Learning from Data Streams

07/23/2020
by Shintaro Fukushima, et al.

In online learning from non-stationary data streams, an algorithm must both learn robustly against outliers and adapt quickly to changes in the underlying data-generating mechanism. In this paper, we refer to the former property of online learning algorithms as robustness and to the latter as adaptivity. There is an obvious tradeoff between the two. Quantifying and evaluating this tradeoff is a fundamental issue because it provides important information about the data-generating mechanism; however, no previous work has considered the tradeoff quantitatively. We propose a novel algorithm, the Stochastic approximation-based Robustness-Adaptivity algorithm (SRA), to evaluate this tradeoff. The key idea of SRA is to update the parameters of a distribution, or its sufficient statistics, with a biased stochastic approximation scheme, while dropping data points whose stochastic updates have large norms. We address the relation between two parameters: the step size of the stochastic approximation, which controls adaptivity, and the threshold on the norm of the stochastic update, which controls robustness. We give a theoretical analysis of the non-asymptotic convergence of SRA in the presence of outliers, which depends on both the step size and the threshold parameter. Because SRA is formulated on the majorization-minimization principle, it is a general algorithm that includes many others, such as the online EM algorithm and stochastic gradient descent. Empirical experiments on both synthetic and real datasets demonstrated that SRA outperformed previous methods.
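The core update described in the abstract can be sketched as follows. This is a minimal illustration of the idea (a thresholded stochastic-approximation step applied to online mean estimation), not the authors' implementation: the function name `sra_step`, the toy stream, and the values of the step size `eta` and threshold `b` are assumptions chosen for illustration.

```python
import numpy as np

def sra_step(theta, x, eta, b):
    """One thresholded stochastic-approximation step.

    eta (step size) controls adaptivity: a larger eta tracks changes faster.
    b (threshold) controls robustness: a point whose stochastic update
    exceeds b in norm is treated as an outlier and dropped.
    """
    g = x - theta                    # stochastic update (online mean estimation)
    if np.linalg.norm(g) > b:        # large update -> likely outlier, drop it
        return theta
    return theta + eta * g           # otherwise move toward the new point

# Toy non-stationary stream: the mean jumps from 0 to 5 at t = 500,
# and about 5% of points are gross outliers far from either regime.
rng = np.random.default_rng(0)
theta = np.zeros(2)
for t in range(1000):
    mu = 0.0 if t < 500 else 5.0
    x = mu + rng.normal(size=2)
    if rng.random() < 0.05:
        x = rng.normal(scale=50.0, size=2)   # outlier
    theta = sra_step(theta, x, eta=0.05, b=10.0)

print(np.round(theta, 1))  # theta ends up tracking the post-change mean near (5, 5)
```

In this sketch the outliers (norm of the update around 50 or more) are rejected by the threshold, while the regime change at t = 500 produces updates below the threshold, so the estimate adapts to the new mean at a rate set by `eta`. Shrinking `b` increases robustness but risks rejecting the points that signal a genuine change, which is the tradeoff the paper quantifies.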


