Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks

03/21/2018
by Daniel Fojo, et al.

Adaptive Computation Time for Recurrent Neural Networks (ACT) is one of the most promising architectures for variable computation. ACT adapts to the input sequence by processing each sample more than once and learning how many times it should do so. In this paper, we compare ACT to Repeat-RNN, a novel architecture that repeats each sample a fixed number of times. Surprisingly, we found that Repeat-RNN performs as well as ACT on the selected tasks. Source code in TensorFlow and PyTorch is publicly available at https://imatge-upc.github.io/danifojo-2018-repeatrnn/
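To make the contrast concrete, the following is a minimal PyTorch sketch of the Repeat-RNN idea as the abstract describes it: every input sample is fed to the recurrent cell a fixed number of times before the sequence advances. The RepeatRNN class, its parameter names, and the choice of a GRU cell are illustrative assumptions, not the authors' released code (the actual TensorFlow and PyTorch implementations are at the repository linked above).

```python
# Hypothetical sketch of Repeat-RNN: a fixed number of recurrent updates
# per input sample, in contrast to ACT, which learns that number per sample.
import torch
import torch.nn as nn


class RepeatRNN(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, repeats: int = 2):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.repeats = repeats  # fixed update count, a hyperparameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, input_size)
        batch = x.size(1)
        h = x.new_zeros(batch, self.cell.hidden_size)
        outputs = []
        for t in range(x.size(0)):
            # Apply the same fixed number of state updates to every sample,
            # whereas ACT would halt adaptively per sample.
            for _ in range(self.repeats):
                h = self.cell(x[t], h)
            outputs.append(h)
        return torch.stack(outputs)  # (seq_len, batch, hidden_size)
```

For example, `RepeatRNN(input_size=10, hidden_size=64, repeats=3)(torch.randn(20, 8, 10))` returns a (20, 8, 64) tensor, having applied three GRU updates per timestep.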


Related research

Adaptive Computation Time for Recurrent Neural Networks (03/29/2016)
This paper introduces Adaptive Computation Time (ACT), an algorithm that...

Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks (08/22/2017)
Recurrent Neural Networks (RNNs) continue to show outstanding performanc...

Dilated Recurrent Neural Networks (10/05/2017)
Learning with recurrent neural networks (RNNs) on long sequences is a no...

Rethinking Recurrent Neural Networks and other Improvements for Image Classification (07/30/2020)
For a long history of Machine Learning which dates back to several decad...

Layer Flexible Adaptive Computational Time for Recurrent Neural Networks (12/06/2018)
Deep recurrent neural networks show significant benefits in prediction t...

Convolutional RNN: an Enhanced Model for Extracting Features from Sequential Data (02/18/2016)
Traditional convolutional layers extract features from patches of data b...

Differentiable Adaptive Computation Time for Visual Reasoning (04/27/2020)
This paper presents a novel attention-based algorithm for achieving adap...
