Long-term Recurrent Convolutional Networks for Visual Recognition and Description

11/17/2014
by Jeff Donahue, et al.

Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models that are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning that is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models that assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics, yet they can still be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
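The core architectural idea, per-frame convolutional features fed into a recurrent network that models temporal dynamics, can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal example assuming PyTorch, a ResNet-18 backbone, an arbitrary hidden size, and time-averaged per-step predictions for clip-level recognition.

# Minimal sketch of the CNN + LSTM idea described in the abstract.
# Assumptions (not from the paper): PyTorch, ResNet-18 encoder, hidden_size=256.
import torch
import torch.nn as nn
from torchvision import models

class LRCNSketch(nn.Module):
    def __init__(self, num_classes, hidden_size=256):
        super().__init__()
        # Per-frame visual encoder: a standard convnet with its classifier head removed.
        cnn = models.resnet18(weights=None)
        self.feature_dim = cnn.fc.in_features
        cnn.fc = nn.Identity()
        self.cnn = cnn
        # Temporal model: an LSTM over the sequence of per-frame features.
        self.lstm = nn.LSTM(self.feature_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        outputs, _ = self.lstm(feats)        # (batch, time, hidden)
        logits = self.classifier(outputs)    # per-time-step class scores
        return logits.mean(dim=1)            # average over time for a clip-level label

if __name__ == "__main__":
    model = LRCNSketch(num_classes=101)
    clip = torch.randn(2, 8, 3, 224, 224)    # 2 clips of 8 RGB frames each
    print(model(clip).shape)                 # torch.Size([2, 101])

Because the convnet and the LSTM sit in one differentiable graph, the whole model can be trained end-to-end with backpropagation, which is the joint learning of perceptual representations and temporal dynamics the abstract emphasizes.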
