Online Learning with Gated Linear Networks

12/05/2017
by Joel Veness, et al.

This paper describes a family of probabilistic architectures designed for online learning under the logarithmic loss. Rather than relying on non-linear transfer functions, our method gains representational power through data conditioning. Under general conditions, we prove a capacity theorem showing that this approach can in principle learn any bounded Borel-measurable function on a compact subset of Euclidean space; the result is stronger than many universality results for connectionist architectures because we provide both the model and the learning procedure for which convergence is guaranteed.
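The abstract rests on two ideas: geometric mixing of input probabilities under the log loss, and gating that selects a different weight vector depending on side information, so representational power comes from data conditioning rather than from nonlinear transfer functions. Below is a minimal sketch of one such gated geometric-mixing neuron, assuming halfspace gating and a plain online gradient step; the class and parameter names (GatedGeometricNeuron, n_contexts, lr) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    # Clip to keep logits finite for probabilities near 0 or 1.
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return np.log(p / (1.0 - p))

class GatedGeometricNeuron:
    """One neuron: a context-gated geometric mixture of input probabilities.

    Illustrative sketch only; assumes n_contexts is a power of two.
    """

    def __init__(self, n_inputs, n_contexts, side_info_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per context; the gate picks which one is active.
        self.weights = np.full((n_contexts, n_inputs), 1.0 / n_inputs)
        # Random halfspace gating: the context is the sign pattern of
        # log2(n_contexts) hyperplane tests on the side information.
        k = int(np.log2(n_contexts))
        self.hyperplanes = rng.standard_normal((k, side_info_dim))
        self.lr = lr

    def _context(self, side_info):
        # Map side information to a context index via halfspace tests.
        bits = (self.hyperplanes @ side_info > 0).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def predict(self, input_probs, side_info):
        c = self._context(side_info)
        # Geometric mixing: weighted sum of input logits, squashed back
        # to a probability.
        return sigmoid(self.weights[c] @ logit(input_probs)), c

    def update(self, input_probs, side_info, target):
        # Online gradient step on the log loss, which is convex in the
        # active weight vector: grad = (p - target) * logit(inputs).
        p, c = self.predict(input_probs, side_info)
        self.weights[c] -= self.lr * (p - target) * logit(input_probs)
        return p

# Illustrative usage: mix two base predictors' probabilities for a binary target.
neuron = GatedGeometricNeuron(n_inputs=2, n_contexts=4, side_info_dim=3)
p = neuron.update(np.array([0.7, 0.4]),
                  side_info=np.array([0.2, -1.0, 0.5]),
                  target=1)
```

Because the log loss is convex in each neuron's weights given its inputs, every neuron can be trained locally and online; this per-neuron, backpropagation-free training is what the related "Gated Linear Networks" entry below refers to.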

Related research

11/03/2009 · Slow Learners are Fast
Online learning algorithms have impressive convergence properties when i...

02/08/2023 · On Computable Online Learning
We initiate a study of computable online (c-online) learning, which we a...

09/30/2019 · Gated Linear Networks
This paper presents a family of backpropagation-free neural architecture...

03/13/2020 · Identification of AC Networks via Online Learning
The increasing integration of intermittent renewable generation in power...

01/16/2022 · Universal Online Learning: an Optimistically Universal Learning Rule
We study the subject of universal online learning with non-i.i.d. proces...

03/20/2018 · Online Learning: Sufficient Statistics and the Burkholder Method
We uncover a fairly general principle in online learning: If regret can ...

11/25/2021 · A Letter on Convergence of In-Parameter-Linear Nonlinear Neural Architectures with Gradient Learnings
This letter summarizes and proves the concept of bounded-input bounded-s...
