Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets

03/03/2015
by Armand Joulin, et al.

Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks: algorithmically generated sequences that can only be learned by models with the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network coupled with a trainable memory.
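The "trainable memory" in this setting is a continuous stack: the recurrent controller reads the stack top and emits soft PUSH/POP/NO-OP weights, so the whole system stays differentiable and trainable by gradient descent. The following is a minimal illustrative sketch of one step of such a cell, not the paper's exact architecture; all parameter names and sizes here are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class StackRNNCell:
    """Sketch of a continuous stack-augmented RNN cell.

    The controller sees the input, its previous hidden state, and the
    stack top; it outputs soft action weights, and the new stack is a
    convex combination of the pushed, popped, and unchanged stacks.
    """
    def __init__(self, n_in, n_hid, stack_depth=8):
        self.U = rng.normal(scale=0.1, size=(n_hid, n_in))   # input weights
        self.R = rng.normal(scale=0.1, size=(n_hid, n_hid))  # recurrent weights
        self.P = rng.normal(scale=0.1, size=(n_hid, 1))      # reads the stack top
        self.A = rng.normal(scale=0.1, size=(3, n_hid))      # action logits: push/pop/no-op
        self.D = rng.normal(scale=0.1, size=(1, n_hid))      # value to push
        self.depth = stack_depth

    def step(self, x, h, stack):
        # Controller update conditioned on the current stack top.
        h_new = sigmoid(self.U @ x + self.R @ h + self.P @ stack[:1])
        a = softmax(self.A @ h_new)          # a[0]=push, a[1]=pop, a[2]=no-op
        v = sigmoid(self.D @ h_new)          # scalar value to push
        pushed = np.concatenate([v, stack[:-1]])           # shift down, new top
        popped = np.concatenate([stack[1:], np.zeros(1)])  # shift up
        stack_new = a[0] * pushed + a[1] * popped + a[2] * stack
        return h_new, stack_new

# Run a few steps on a toy alternating input.
cell = StackRNNCell(n_in=2, n_hid=4)
h, stack = np.zeros(4), np.zeros(cell.depth)
for t in range(6):
    x = np.eye(2)[t % 2]
    h, stack = cell.step(x, h, stack)
```

Because the stack update is a soft mixture rather than a hard discrete choice, gradients flow through the memory operations, which is what lets tasks like counting (e.g. recognizing a^n b^n) be learned end to end.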

Related research

04/05/2018 · Review of Deep Learning
In recent years, China, the United States and other countries, Google an...

07/01/2019 · Understanding Memory Modules on Learning Simple Algorithms
Recent work has shown that memory modules are crucial for the generaliza...

06/07/2023 · Long Sequence Hopfield Memory
Sequence memory is an essential attribute of natural and artificial inte...

09/19/2017 · Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization
Learning to remember long sequences remains a challenging task for recur...

01/24/2018 · PRNN: Recurrent Neural Network with Persistent Memory
Although Recurrent Neural Network (RNN) has been a powerful tool for mod...

08/22/2016 · Surprisal-Driven Feedback in Recurrent Networks
Recurrent neural nets are widely used for predicting temporal data. Thei...

09/15/2017 · Dynamic Capacity Estimation in Hopfield Networks
Understanding the memory capacity of neural networks remains a challengi...
