D3D: Distilled 3D Networks for Video Action Recognition

12/19/2018
by Jonathan C. Stroud, et al.

State-of-the-art methods for video action recognition commonly use an ensemble of two networks: the spatial stream, which takes RGB frames as input, and the temporal stream, which takes optical flow as input. In recent work, both of these streams consist of 3D Convolutional Neural Networks, which apply spatiotemporal filters to the video clip before performing classification. Conceptually, the temporal filters should allow the spatial stream to learn motion representations, making the temporal stream redundant. However, we still see significant benefits in action recognition performance by including an entirely separate temporal stream, indicating that the spatial stream is "missing" some of the signal captured by the temporal stream. In this work, we first investigate whether motion representations are indeed missing in the spatial stream of 3D CNNs. Second, we demonstrate that these motion representations can be improved by distillation, by tuning the spatial stream to predict the outputs of the temporal stream, effectively combining both models into a single stream. Finally, we show that our Distilled 3D Network (D3D) achieves performance on par with two-stream approaches, using only a single model and with no need to compute optical flow.
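The distillation step described in the abstract can be illustrated compactly. The following is a minimal, hypothetical PyTorch sketch, assuming a frozen flow-based temporal stream as the teacher and an MSE loss between the two streams' logits; the names (Simple3DCNN, lambda_distill, training_step) are illustrative stand-ins, and the paper's actual architectures, distillation target, and loss weighting may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 3D-CNN backbone; a stand-in for the paper's actual streams.
# Any video classification model exposing pre-softmax logits would work here.
class Simple3DCNN(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # pre-softmax logits

num_classes = 400
spatial_stream = Simple3DCNN(in_channels=3, num_classes=num_classes)   # RGB clips
temporal_stream = Simple3DCNN(in_channels=2, num_classes=num_classes)  # optical flow

# The temporal stream acts as a frozen teacher during distillation.
temporal_stream.eval()
for p in temporal_stream.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(spatial_stream.parameters(), lr=0.01, momentum=0.9)
lambda_distill = 1.0  # weight on the distillation term (hypothetical value)

def training_step(rgb_clip, flow_clip, labels):
    """One optimization step: classification loss + distillation loss."""
    student_logits = spatial_stream(rgb_clip)
    with torch.no_grad():
        teacher_logits = temporal_stream(flow_clip)

    cls_loss = F.cross_entropy(student_logits, labels)
    # Distillation: push the spatial stream's outputs toward the temporal
    # stream's outputs, so motion information is absorbed into one model.
    distill_loss = F.mse_loss(student_logits, teacher_logits)

    loss = cls_loss + lambda_distill * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with dummy data: a batch of 2 clips, 8 frames of 112x112.
rgb = torch.randn(2, 3, 8, 112, 112)
flow = torch.randn(2, 2, 8, 112, 112)
labels = torch.randint(0, num_classes, (2,))
print(training_step(rgb, flow, labels))
```

Because the spatial stream absorbs the motion signal during training, inference would run only spatial_stream on RGB clips; optical flow and the temporal stream are not needed at test time, which is the single-stream advantage the abstract describes.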

