Adding Gradient Noise Improves Learning for Very Deep Networks

11/21/2015
by Arvind Neelakantan, et al.

Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but can also result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures.
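The core idea is simply to perturb each parameter gradient with zero-mean Gaussian noise before the update, with the noise variance annealed over training (the paper decays it roughly as eta / (1 + t)^0.55). Below is a minimal PyTorch sketch of that idea; the helper name, hyperparameter defaults, and the model/optimizer/loader names in the usage comment are illustrative assumptions, not code from the paper.

```python
# Sketch of annealed gradient noise, assuming PyTorch and a generic model.
# Noise variance follows sigma_t^2 = eta / (1 + t)**gamma, decaying with step t.
import torch

def add_gradient_noise(model, step, eta=0.01, gamma=0.55):
    """Add zero-mean Gaussian noise to every parameter gradient in-place."""
    sigma = (eta / (1.0 + step) ** gamma) ** 0.5
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * sigma)

# Usage inside a standard training loop (hypothetical names):
# for step, (x, y) in enumerate(loader):
#     optimizer.zero_grad()
#     loss = criterion(model(x), y)
#     loss.backward()
#     add_gradient_noise(model, step)  # inject noise before the parameter update
#     optimizer.step()
```

Because the noise is added to the gradients rather than the weights directly, it composes with any optimizer, and the decaying schedule means early training explores more while later training converges normally.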

