Deep RNN Framework for Visual Sequential Applications

11/25/2018
by Bo Pang, et al.

Extracting temporal and representation features efficiently plays a pivotal role in understanding visual sequence information. To deal with this, we propose a new recurrent neural framework that can be stacked deep effectively. There are two main novel designs in our deep RNN framework: one is a new RNN module called the Representation Bridge Module (RBM), which splits the information flowing along the sequence (temporal direction) from the information flowing along depth (spatial representation direction), making the network easier to train when built deep by balancing these two directions; the other is the Overlap Coherence Training Scheme, which reduces the training complexity of long visual sequential tasks given limited computing resources. We provide empirical evidence that our deep RNN framework is easy to optimize and gains accuracy from increased depth on several visual sequence problems. On these tasks, we evaluate our deep RNN framework with 15 layers, 7 times deeper than conventional RNN networks, and it remains easy to train. Our deep framework achieves more than 11% relative improvement over shallow RNN models on Kinetics, UCF-101, and HMDB-51 for video classification. For auxiliary annotation, after replacing the shallow RNN part of Polygon-RNN with our 15-layer deep RBM, the performance improves by 14.7%. For video future prediction, our deep RNN improves the state-of-the-art shallow model's performance by 2.4%. Code and models are released along with this paper.
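The RBM idea described above, decoupling the information that flows along time from the information that flows along depth so that a 15-layer stack stays trainable, can be illustrated with a short sketch. This is not the authors' implementation: the class names, the ConvGRU-style gating, and the residual connection are illustrative assumptions about how such a split could look.

```python
# A minimal sketch of a Representation Bridge Module-style layer: each layer
# emits two outputs, one passed up along depth (representation flow) and one
# carried forward along time (temporal flow), instead of reusing a single
# hidden state for both roles. Gating and residual choices are assumptions.
import torch
import torch.nn as nn


class RepresentationBridgeModule(nn.Module):
    """One recurrent layer with decoupled depth-wise and time-wise outputs."""

    def __init__(self, channels):
        super().__init__()
        # Temporal branch: gated update of the recurrent state (assumed form).
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        # Representation branch: feeds the next (deeper) layer directly.
        self.repr_out = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x_t, h_prev):
        """x_t: feature from the layer below at time t; h_prev: state at t-1."""
        xh = torch.cat([x_t, h_prev], dim=1)
        z = torch.sigmoid(self.gate(xh))            # update gate
        h_tilde = torch.tanh(self.cand(xh))         # candidate state
        h_t = (1 - z) * h_prev + z * h_tilde        # temporal flow (to time t+1)
        y_t = torch.relu(self.repr_out(xh)) + x_t   # representation flow (to layer l+1)
        return y_t, h_t


class DeepRNN(nn.Module):
    """A deep stack of such layers; each layer keeps its own recurrent state."""

    def __init__(self, channels, depth=15):
        super().__init__()
        self.layers = nn.ModuleList(
            RepresentationBridgeModule(channels) for _ in range(depth)
        )

    def forward(self, frames):
        """frames: (T, B, C, H, W) sequence of per-frame features."""
        T, B, C, H, W = frames.shape
        states = [frames.new_zeros(B, C, H, W) for _ in self.layers]
        outputs = []
        for t in range(T):
            x = frames[t]
            for i, layer in enumerate(self.layers):
                x, states[i] = layer(x, states[i])
            outputs.append(x)
        return torch.stack(outputs)                 # (T, B, C, H, W)
```

In practice one would feed per-frame CNN features into such a stack and attach a task head (e.g. a classifier) on top of the final outputs; that wiring is omitted here.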
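The Overlap Coherence Training Scheme can be read, at a high level, as cutting a long sequence into short overlapping clips that fit in memory and tying together the predictions made on the shared frames of adjacent clips with an extra loss term. The sketch below is a hedged interpretation of that idea; the clip length, overlap size, loss choice, and weight are placeholder assumptions, not values from the paper.

```python
# A hedged sketch of an overlap-coherence style training term: adjacent clips
# share `overlap` frames, and the model's outputs on those shared frames are
# encouraged to agree. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def overlap_coherence_loss(model, frames, clip_len=16, overlap=4, coh_weight=0.1):
    """frames: (T, B, C, H, W) long sequence; returns the coherence loss term."""
    clips, step = [], clip_len - overlap
    for start in range(0, frames.shape[0] - clip_len + 1, step):
        clips.append(model(frames[start:start + clip_len]))  # (clip_len, B, ...)

    coherence = frames.new_zeros(())
    for prev, nxt in zip(clips[:-1], clips[1:]):
        # The same absolute frames appear at the end of `prev` and the start
        # of `nxt`; penalize disagreement between the two sets of outputs.
        coherence = coherence + F.mse_loss(prev[-overlap:], nxt[:overlap])
    return coh_weight * coherence
```

This term would be added to the usual per-clip task loss, so each optimization step only ever touches short clips while the overlaps keep the clip-level predictions consistent across the long sequence.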
