Gated Linear Networks

09/30/2019
by Joel Veness, et al.

This paper presents a family of backpropagation-free neural architectures, Gated Linear Networks (GLNs), that are well suited to online learning applications where sample efficiency is of paramount importance. The impressive empirical performance of these architectures has long been known within the data compression community, but a theoretically satisfying explanation of how and why they perform so well has proven elusive. What distinguishes these architectures from other neural systems is the distributed and local nature of their credit assignment mechanism: each neuron directly predicts the target and has its own set of hard-gated weights that are locally adapted via online convex optimization. By providing an interpretation, generalization and subsequent theoretical analysis, we show that sufficiently large GLNs are universal in a strong sense: not only can they model any compactly supported, continuous density function to arbitrary accuracy, but any choice of no-regret online convex optimization technique will provably converge to the correct solution given enough data. Empirically, we present a collection of single-pass learning results on established machine learning benchmarks that are competitive with results obtained using general purpose batch learning techniques.
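As a rough illustration of the mechanism the abstract describes, below is a minimal sketch (not the authors' reference implementation) of a single GLN-style neuron in Python/NumPy: a hard gate built from random half-space tests on side information selects one weight vector per example, the neuron mixes its input probabilities in logit space, and the selected weights are adapted with a plain online gradient step on the log loss, which is convex in the weights. All names (GatedNeuron, n_contexts, lr) and the specific gating and update choices are illustrative assumptions; details such as projecting weights onto a bounded convex set are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p, eps=1e-6):
    # Clip to avoid infinities at p = 0 or 1.
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

class GatedNeuron:
    """Sketch of one GLN-style neuron with half-space context gating (illustrative only)."""

    def __init__(self, n_inputs, n_side, n_contexts=4, lr=0.01, rng=None):
        rng = rng or np.random.default_rng(0)
        # Random hyperplanes over the side information define the hard gate.
        self.hyperplanes = rng.standard_normal((n_contexts, n_side))
        # One weight vector per context cell (2**n_contexts cells), uniform init.
        self.weights = np.full((2 ** n_contexts, n_inputs), 1.0 / n_inputs)
        self.lr = lr

    def _context(self, side):
        # Which side of each hyperplane the side information falls on -> cell index.
        bits = (self.hyperplanes @ side > 0).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def predict(self, inputs, side):
        # Geometric mixing of input probabilities with the gated weight vector.
        c = self._context(side)
        return sigmoid(self.weights[c] @ logit(inputs)), c

    def update(self, inputs, side, target):
        # Online gradient step on the log loss, applied only to the active weights.
        p, c = self.predict(inputs, side)
        self.weights[c] -= self.lr * (p - target) * logit(inputs)
        return p

# Tiny usage example with hypothetical dimensions and a toy target.
neuron = GatedNeuron(n_inputs=3, n_side=5)
rng = np.random.default_rng(1)
for _ in range(100):
    side = rng.standard_normal(5)
    base_preds = rng.uniform(0.1, 0.9, size=3)   # probabilities from a previous layer
    target = float(side[0] > 0)
    neuron.update(base_preds, side, target)
```

In a full network, many such neurons would be stacked in layers, each one predicting the same target from the previous layer's probabilities; because every weight update is a local convex problem, no backpropagation is needed.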


Related research

06/10/2020  Gaussian Gated Linear Networks
We propose the Gaussian Gated Linear Network (G-GLN), an extension to th...

12/05/2017  Online Learning with Gated Linear Networks
This paper describes a family of probabilistic architectures designed fo...

04/07/2016  Deep Online Convex Optimization with Gated Games
Methods from convex optimization are widely used as building blocks for ...

02/21/2020  Online Learning in Contextual Bandits using Gated Linear Networks
We introduce a new and completely online contextual bandit algorithm cal...

09/06/2015  Deep Online Convex Optimization by Putting Forecaster to Sleep
Methods from convex optimization such as accelerated gradient descent ar...

01/12/2020  Channel Assignment in Uplink Wireless Communication using Machine Learning Approach
This letter investigates a channel assignment problem in uplink wireless...

04/15/2020  Online Multiserver Convex Chasing and Optimization
We introduce the problem of k-chasing of convex functions, a simultaneou...
