Neural GPUs Learn Algorithms

by Łukasz Kaiser, et al.

Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture that addresses this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks, including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
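To make the core mechanism concrete, below is a minimal NumPy sketch of one step of a convolutional gated recurrent unit (CGRU), the cell type the abstract describes: a GRU whose state update is a 1-D convolution over a mental-image-like state, so every position is updated in parallel. Kernel shapes, the parameter names `Ku`/`Kr`/`K`, and the zero-padding scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(state, kernels, bias):
    """Same-padded 1-D convolution.

    state:   (width, channels) array, the recurrent "mental image".
    kernels: (k, channels, channels) array of filter weights.
    bias:    (channels,) array.
    """
    k = kernels.shape[0]
    pad = k // 2
    padded = np.pad(state, ((pad, pad), (0, 0)))
    out = np.zeros_like(state)
    for i in range(state.shape[0]):
        window = padded[i:i + k]                       # (k, channels)
        out[i] = np.einsum('kc,kcd->d', window, kernels) + bias
    return out

def cgru_step(s, params):
    """One CGRU step: a GRU update where all transforms are convolutions.

    u is the update gate, r the reset gate, as in a standard GRU; the
    whole state s is updated in parallel across positions.
    """
    u = sigmoid(conv1d(s, params['Ku'], params['bu']))
    r = sigmoid(conv1d(s, params['Kr'], params['br']))
    candidate = np.tanh(conv1d(r * s, params['K'], params['b']))
    return u * s + (1.0 - u) * candidate
```

Because the gate and candidate transforms are convolutions rather than sequential reads and writes, unrolling this step for n time steps gives a network whose depth grows with n but whose work at each step is fully parallel across the state width.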




Related papers

- Extensions and Limitations of the Neural GPU: "The Neural GPU is a recent model that can learn algorithms such as multi..."
- Algorithmic Complexities in Backpropagation and Tropical Neural Networks: "In this note, we propose a novel technique to reduce the algorithmic com..."
- Improving the Neural GPU Architecture for Algorithm Learning: "Algorithm learning is a core problem in artificial intelligence with sig..."
- Learning Numeracy: Binary Arithmetic with Neural Turing Machines: "One of the main problems encountered so far with recurrent neural networ..."
- Neural Execution Engines: Learning to Execute Subroutines: "A significant effort has been made to train neural networks that replica..."
- Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks: "Deep neural networks are powerful machines for visual pattern recognitio..."
- Adding Gradient Noise Improves Learning for Very Deep Networks: "Deep feedforward and recurrent networks have achieved impressive results..."
