Understanding Feature Selection and Feature Memorization in Recurrent Neural Networks

03/03/2019
by Bokang Zhu, et al.

In this paper, we propose a test, called the Flagged-1-Bit (F1B) test, to study the intrinsic capability of recurrent neural networks in sequence learning. Four different recurrent network models are studied, both analytically and experimentally, using this test. Our results suggest that in general there exists a conflict between feature selection and feature memorization in sequence learning. Such a conflict can be resolved either via a gating mechanism, as in LSTM, or by increasing the state dimension, as in Vanilla RNN. Gated models resolve it by adaptively adjusting their state-update equations, whereas Vanilla RNN does so by assigning different tasks to different state dimensions. Insights into feature selection and memorization in recurrent networks are given.
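The abstract does not spell out the F1B construction, but a task of this name plausibly flags a single time step whose bit the network must both select (notice the flag) and memorize (carry to the end of the sequence). The sketch below generates synthetic data for such a task; the two-channel encoding (bit, flag) and the function name are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch of a Flagged-1-Bit-style sequence task.
# Assumed variant: each sequence is a stream of random +/-1 bits,
# exactly one time step is "flagged", and the bit at that flagged step
# is the label the network must recall after reading the whole sequence.
import numpy as np

def make_f1b_batch(batch_size, seq_len, rng):
    # Two input channels per time step: (bit, flag).
    bits = rng.choice([-1.0, 1.0], size=(batch_size, seq_len))
    flags = np.zeros((batch_size, seq_len))
    pos = rng.integers(0, seq_len, size=batch_size)   # one flagged step per sequence
    flags[np.arange(batch_size), pos] = 1.0
    x = np.stack([bits, flags], axis=-1)              # shape (B, T, 2)
    y = bits[np.arange(batch_size), pos]              # label = the flagged bit
    return x, y

rng = np.random.default_rng(0)
x, y = make_f1b_batch(4, 10, rng)
print(x.shape, y.shape)  # (4, 10, 2) (4,)
```

Solving this task requires both capabilities the paper contrasts: the flag channel drives feature selection at one step, while the selected bit must survive in the state across the remaining steps (feature memorization).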


