Belief Flows of Robust Online Learning

05/26/2015
by Pedro A. Ortega et al.

This paper introduces a new probabilistic model for online learning that dynamically incorporates information from stochastic gradients of an arbitrary loss function. As in probabilistic filtering, the model maintains a Gaussian belief over the optimal weight parameters. Unlike traditional Bayesian updates, however, each update relies on only a small number of gradient evaluations at locations chosen by Thompson sampling, which keeps it computationally tractable. The belief is then transformed via a linear flow field that updates the belief distribution according to rules derived from information-theoretic principles. Several variants of the algorithm, corresponding to different constraints on the flow field, are presented and compared with conventional online learning algorithms. Results are given for several classification tasks using models that include logistic regression and multilayer neural networks.
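As a rough illustration of the kind of update the abstract describes, the sketch below maintains a diagonal Gaussian belief over the weights of a logistic regressor, draws a weight vector by Thompson sampling, evaluates a stochastic gradient of the logistic loss at that sample, and then applies a simple affine (linear) flow to the belief parameters. The class name BeliefFlowSketch, the step size lr, the variance contraction factor shrink, and the specific mean/variance update rule are all illustrative assumptions; the paper's actual flow is derived from information-theoretic principles and is not reproduced here.

# Minimal sketch of a belief-flow-style online learner.
# Assumptions: diagonal Gaussian belief, logistic loss, and a simplified
# affine flow (mean shift + variance contraction) standing in for the
# paper's information-theoretically derived update.
import numpy as np

class BeliefFlowSketch:
    def __init__(self, dim, lr=0.1, shrink=0.999):
        self.mu = np.zeros(dim)    # belief mean over the weights
        self.var = np.ones(dim)    # diagonal belief variance (assumption)
        self.lr = lr               # step size for the mean flow (hypothetical)
        self.shrink = shrink       # variance contraction factor (hypothetical)

    def sample_weights(self):
        # Thompson sampling: draw a weight vector from the current belief.
        return self.mu + np.sqrt(self.var) * np.random.randn(self.mu.size)

    def update(self, x, y):
        # One online step: sample weights, evaluate a stochastic gradient of
        # the logistic loss at the sample, then apply an affine flow to the
        # belief parameters (mean shift, variance shrink).
        w = self.sample_weights()
        p = 1.0 / (1.0 + np.exp(-x @ w))      # predicted probability
        grad = (p - y) * x                    # gradient of logistic loss at w
        self.mu -= self.lr * self.var * grad  # variance-preconditioned mean shift
        self.var *= self.shrink               # belief contracts as evidence accumulates

# Usage: online logistic regression on a toy data stream.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0, 0.5])
learner = BeliefFlowSketch(dim=3)
for _ in range(2000):
    x = rng.normal(size=3)
    y = float(rng.random() < 1.0 / (1.0 + np.exp(-x @ true_w)))
    learner.update(x, y)
print("belief mean:", learner.mu)

Running the toy loop drives the belief mean toward the direction of true_w while the belief variance shrinks, which mirrors the abstract's picture of a Gaussian belief being transported by a flow as gradient information arrives.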
