MinimalRNN: Toward More Interpretable and Trainable Recurrent Neural Networks

11/18/2017
by Minmin Chen et al.

We introduce MinimalRNN, a new recurrent neural network architecture that achieves performance comparable to popular gated RNNs with a simpler structure. It employs a minimal update rule within the RNN, which not only leads to more efficient training and inference but, more importantly, to better interpretability and trainability. We demonstrate that by adopting this more restrictive update rule, MinimalRNN learns disentangled RNN states. We further examine the learning dynamics of different RNN structures using input-output Jacobians, and show that MinimalRNN captures longer-range dependencies than existing RNN architectures.
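
The update rule the abstract refers to can be made concrete. Below is a minimal sketch in JAX of a MinimalRNN-style step: an input map Phi (here a tanh layer) produces a latent z_t, and a single update gate u_t blends the previous state with it. The parameter names (Wx, Uh, Uz, bu) and the choice of tanh for Phi are illustrative assumptions, not the paper's exact parameterization.

```python
import jax
import jax.numpy as jnp

def minimal_rnn_step(params, h_prev, x):
    """One MinimalRNN-style step: map the input into the latent space,
    then gate between the previous state and the mapped input."""
    z = jnp.tanh(params["Wx"] @ x)                        # z_t = Phi(x_t)
    u = jax.nn.sigmoid(params["Uh"] @ h_prev              # update gate u_t
                       + params["Uz"] @ z + params["bu"])
    return u * h_prev + (1.0 - u) * z                     # h_t
```

Because each state dimension is updated as a convex combination of its own previous value and the corresponding latent input dimension, state coordinates are never mixed by the update itself, which is one structural reading of the disentangled-states claim.

The input-output Jacobian analysis mentioned in the abstract can likewise be sketched: differentiate the final state with respect to each input step, and use the norm of each slice as a measure of how much that step still influences the output. The harness below is an illustrative reconstruction, not the paper's code; the dimensions and random initialization are arbitrary.

```python
def final_state(params, h0, xs):
    # Unroll the recurrence over the whole sequence and return h_T.
    h = h0
    for x in xs:
        h = minimal_rnn_step(params, h, x)
    return h

# Hypothetical setup: small random parameters, random input sequence.
d_x, d_h, T = 4, 8, 20
k1, k2, k3, kx = jax.random.split(jax.random.PRNGKey(0), 4)
params = {"Wx": 0.1 * jax.random.normal(k1, (d_h, d_x)),
          "Uh": 0.1 * jax.random.normal(k2, (d_h, d_h)),
          "Uz": 0.1 * jax.random.normal(k3, (d_h, d_h)),
          "bu": jnp.zeros(d_h)}
xs = jax.random.normal(kx, (T, d_x))
h0 = jnp.zeros(d_h)

# d h_T / d x_t for every t; jac has shape (d_h, T, d_x). The Frobenius
# norm of each slice quantifies how strongly the final state depends on
# step t: slowly decaying norms for early t indicate long memory.
jac = jax.jacobian(final_state, argnums=2)(params, h0, xs)
dependence = jnp.linalg.norm(jac, axis=(0, 2))            # one value per step
```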

Related research

A recurrent neural network without chaos (12/19/2016)
We introduce an exceptionally simple gated recurrent neural network (RNN...

Neural Speed Reading via Skim-RNN (11/06/2017)
Inspired by the principles of speed reading, we introduce Skim-RNN, a re...

Recurrent Neural Network from Adder's Perspective: Carry-lookahead RNN (06/22/2021)
The recurrent network architecture is a widely used model in sequence mo...

Learning with Interpretable Structure from RNN (10/25/2018)
In structure learning, the output is generally a structure that is used ...

Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data (05/05/2020)
Recurrent Neural Networks (RNNs) are popular models of brain function. T...

Dynamic Systems Simulation and Control Using Consecutive Recurrent Neural Networks (02/14/2020)
In this paper, we introduce a novel architecture to connecting adaptive ...

Input Switched Affine Networks: An RNN Architecture Designed for Interpretability (11/28/2016)
There exist many problem domains where the interpretability of neural ne...
