D3D: Distilled 3D Networks for Video Action Recognition

by Jonathan C. Stroud, et al.

State-of-the-art methods for video action recognition commonly use an ensemble of two networks: the spatial stream, which takes RGB frames as input, and the temporal stream, which takes optical flow as input. In recent work, both of these streams consist of 3D Convolutional Neural Networks, which apply spatiotemporal filters to the video clip before performing classification. Conceptually, the temporal filters should allow the spatial stream to learn motion representations, making the temporal stream redundant. However, we still see significant benefits in action recognition performance by including an entirely separate temporal stream, indicating that the spatial stream is "missing" some of the signal captured by the temporal stream. In this work, we first investigate whether motion representations are indeed missing in the spatial stream of 3D CNNs. Second, we demonstrate that these motion representations can be improved by distillation, by tuning the spatial stream to predict the outputs of the temporal stream, effectively combining both models into a single stream. Finally, we show that our Distilled 3D Network (D3D) achieves performance on par with two-stream approaches, using only a single model and with no need to compute optical flow.
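The distillation step described above can be made concrete with a small sketch: the spatial (RGB) stream is trained with its usual classification loss plus a regression term that pushes its intermediate features toward the outputs of a frozen temporal (flow) stream. This is a minimal NumPy illustration of that kind of combined objective, not the paper's exact implementation; the function name, the feature-level matching point, and the weighting term `lam` are assumptions for illustration.

```python
import numpy as np

def distillation_loss(spatial_logits, labels, spatial_feats, flow_feats, lam=1.0):
    """Toy D3D-style objective: classification loss on the spatial stream
    plus a distillation term matching the (frozen) flow stream's features."""
    # Classification term: softmax cross-entropy on the spatial stream's logits.
    shifted = spatial_logits - spatial_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    # Distillation term: the spatial stream regresses the flow stream's
    # features, so motion information is folded into a single model.
    mse = ((spatial_feats - flow_feats) ** 2).mean()
    return ce + lam * mse
```

When the spatial stream's features already match the flow stream's, the second term vanishes and only the classification loss remains, which is the sense in which the temporal stream becomes redundant after distillation.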




Related papers:

- Flow-Distilled IP Two-Stream Networks for Compressed Video Action Recognition
- Multi-Stream Single Shot Spatial-Temporal Action Detection
- Investigation on Combining 3D Convolution of Image Data and Optical Flow to Generate Temporal Action Proposals
- PERF-Net: Pose Empowered RGB-Flow Net
- Reversing Two-Stream Networks with Decoding Discrepancy Penalty for Robust Action Recognition
- SegCodeNet: Color-Coded Segmentation Masks for Activity Detection from Wearable Cameras
- Attention Filtering for Multi-person Spatiotemporal Action Detection on Deep Two-Stream CNN Architectures
