Deep Online Convex Optimization with Gated Games

04/07/2016
by David Balduzzi et al.

Methods from convex optimization are widely used as building blocks for deep learning algorithms. However, the reasons for their empirical success are unclear, since modern convolutional networks (convnets), which incorporate rectifier units and max-pooling, are neither smooth nor convex, so standard guarantees do not apply. This paper provides the first convergence rates for gradient descent on rectifier convnets. The proof exploits the particular structure of rectifier networks, which consists of binary active/inactive gates applied on top of an underlying linear network. The approach generalizes to max-pooling, dropout, and maxout; in other words, to precisely the neural networks that perform best empirically. The key step is to introduce gated games, an extension of convex games with similar convergence properties that captures the gating function of rectifiers. The main result is that rectifier convnets converge to a critical point at a rate controlled by the gated-regret of the units in the network. Corollaries of the main result include: (i) a game-theoretic description of the representations learned by a neural network; (ii) a logarithmic-regret algorithm for training neural nets; and (iii) a formal setting for analyzing conditional computation in neural nets, which can be applied to recently developed models of attention.
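A minimal Python sketch of this gating view (illustrative only; the function name, single-unit setup, and use of NumPy are assumptions here, not the paper's code): a rectifier unit factorizes exactly as a binary gate multiplied by the output of an underlying linear unit, so holding the gates fixed leaves a network that is linear in its weights.

    import numpy as np

    def rectifier_unit(w, x):
        z = np.dot(w, x)              # output of the underlying linear unit
        gate = 1.0 if z > 0 else 0.0  # binary active/inactive gate
        return gate * z               # equals the usual ReLU, max(z, 0)

    rng = np.random.default_rng(0)
    w, x = rng.normal(size=4), rng.normal(size=4)
    assert rectifier_unit(w, x) == max(np.dot(w, x), 0.0)

The assert checks that the gate-times-linear factorization agrees with the standard max(z, 0) form of the rectifier; it is this decomposition that allows each unit to be treated as a player in a gated game whenever its gate is active.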


Related research

Deep Online Convex Optimization by Putting Forecaster to Sleep (09/06/2015)
Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks (11/07/2016)
A Linearly Convergent Conditional Gradient Algorithm with Applications to Online and Stochastic Optimization (01/20/2013)
Gated Linear Networks (09/30/2019)
No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization (11/22/2021)
Minimax Optimal Online Stochastic Learning for Sequences of Convex Functions under Sub-Gradient Observation Failures (04/19/2019)
Globally Consistent Algorithms for Mixture of Experts (02/21/2018)
