Deep Online Convex Optimization by Putting Forecaster to Sleep

09/06/2015
by David Balduzzi, et al.

Methods from convex optimization such as accelerated gradient descent are widely used as building blocks for deep learning algorithms. However, the reasons for their empirical success are unclear, since neural networks are not convex and standard guarantees do not apply. This paper develops the first rigorous link between online convex optimization and error backpropagation on convolutional networks. The first step is to introduce circadian games, a mild generalization of convex games with similar convergence properties. The main result is that error backpropagation on a convolutional network is equivalent to playing out a circadian game. It follows immediately that the waking-regret of players in the game (the units in the neural network) controls the overall rate of convergence of the network. Finally, we explore some implications of the results: (i) we describe the representations learned by a neural network game-theoretically, (ii) we propose a learning setting at the level of individual units that can be plugged into deep architectures, and (iii) we propose a new approach to adaptive model selection by applying bandit algorithms to choose which players to wake on each round.
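The notion of waking-regret comes from the classic sleeping-experts (specialists) setting: on each round only a subset of forecasters is awake, the learner aggregates just those, and each forecaster accumulates regret only on rounds when it was awake. The following is a minimal sketch of that setting, not the paper's circadian-game construction; the multiplicative-weights update, the random sleep pattern, and the random losses are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, T, eta = 4, 500, 0.1

w = np.ones(n_experts)               # multiplicative weights over forecasters
waking_regret = np.zeros(n_experts)  # regret counted only on awake rounds

for t in range(T):
    awake = rng.random(n_experts) < 0.7   # illustrative random sleep pattern
    if not awake.any():
        continue                          # nobody awake: round is skipped
    p = w * awake
    p = p / p.sum()                       # play Hedge restricted to awake experts
    losses = rng.random(n_experts)        # adversary's losses in [0, 1]
    algo_loss = p @ losses
    # Only awake experts are updated and only they accumulate waking-regret;
    # sleeping experts are untouched, so they incur no regret while asleep.
    w[awake] *= np.exp(-eta * losses[awake])
    waking_regret[awake] += algo_loss - losses[awake]
```

In the paper's framing, the units of the network play the role of the experts, and the convergence rate of the whole network is controlled by the waking-regret each unit accumulates on the rounds it participates in.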

