Learning from Videos with Deep Convolutional LSTM Networks

04/09/2019
by   Logan Courtney, et al.

This paper explores the use of convolutional LSTMs to learn spatial and temporal information in videos simultaneously. A deep network of convolutional LSTMs gives the model access to the entire range of temporal information at all spatial scales of the data. We describe lipreading experiments with convolutional LSTMs demonstrating that the model can selectively choose which spatiotemporal scales are most relevant for a particular dataset. The proposed deep architecture also holds promise for other applications where spatiotemporal features play a vital role, without the network design having to be tailored to the specific spatiotemporal features of the problem. On the Lip Reading in the Wild (LRW) dataset, our model slightly outperforms the previous state of the art (83.4 vs. 83.0) when pretrained on the Lip Reading Sentences (LRS2) dataset.
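To make the core mechanism concrete, the sketch below shows one time step of a generic convolutional LSTM cell in pure Python: the standard LSTM gate equations, but with the matrix multiplications replaced by 2D convolutions so the hidden and cell states keep their spatial layout. This is a minimal single-channel illustration of the general ConvLSTM update, not the paper's actual multi-layer architecture; the function names, kernel shapes, and single-channel simplification are all assumptions made for clarity.

```python
import math


def conv2d(img, kernel):
    """'Same' 2D convolution of a single-channel grid with zero padding."""
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    y, x = i + a - ph, j + b - pw
                    if 0 <= y < H and 0 <= x < W:
                        s += kernel[a][b] * img[y][x]
            out[i][j] = s
    return out


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def convlstm_step(x, h, c, kernels):
    """One ConvLSTM time step on single-channel H x W grids.

    `kernels` maps a gate name ("i", "f", "o", "g") to a pair
    (input kernel, hidden kernel). Because every gate is computed
    by convolution, the states h and c stay spatial, which is what
    lets the cell track features across both space and time.
    """
    H, W = len(x), len(x[0])

    def gate(name, act):
        kx, kh = kernels[name]
        a, b = conv2d(x, kx), conv2d(h, kh)
        return [[act(a[i][j] + b[i][j]) for j in range(W)] for i in range(H)]

    i_g = gate("i", sigmoid)     # input gate
    f_g = gate("f", sigmoid)     # forget gate
    o_g = gate("o", sigmoid)     # output gate
    g_g = gate("g", math.tanh)   # candidate cell update
    c_new = [[f_g[i][j] * c[i][j] + i_g[i][j] * g_g[i][j]
              for j in range(W)] for i in range(H)]
    h_new = [[o_g[i][j] * math.tanh(c_new[i][j])
              for j in range(W)] for i in range(H)]
    return h_new, c_new
```

Processing a video then amounts to feeding each frame through `convlstm_step` in order, carrying `h` and `c` forward; stacking several such layers (as the paper's deep network does) exposes temporal information at progressively coarser spatial scales.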
