Collaborative Distillation in the Parameter and Spectrum Domains for Video Action Recognition
Recent years have witnessed significant progress on action recognition with deep networks. However, most current video networks require large memory and computational resources, which hinders their application in practice. Existing knowledge distillation methods are limited to the image-level spatial domain, ignoring the temporal and frequency information that provides structural knowledge and is important for video analysis. This paper explores how to train small and efficient networks for action recognition. Specifically, we propose two distillation strategies in the frequency domain: feature-spectrum distillation and parameter-distribution distillation. Our insight is that strong action-recognition performance requires explicitly modeling the temporal frequency spectrum of video features. We therefore introduce a spectrum loss that enforces the student network to mimic the temporal frequency spectrum of the teacher network, instead of distilling features implicitly as in many previous works. Second, the parameter frequency distribution is further adopted to guide the student network to learn the teacher's appearance-modeling process. In addition, a collaborative learning strategy is presented to optimize the training process from a probabilistic view. Extensive experiments on several action recognition benchmarks, including Kinetics, Something-Something, and Jester, consistently verify the effectiveness of our approach and demonstrate that our method achieves higher performance than state-of-the-art methods with the same backbone.
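To make the feature-spectrum idea concrete, the following is a minimal NumPy sketch of what such a spectrum loss could look like: the FFT is taken along the temporal axis of the feature maps, and the student is penalized for deviating from the teacher's magnitude spectrum. The function name, feature shapes, and the squared-error form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spectrum_loss(student_feat, teacher_feat):
    """Sketch of temporal feature-spectrum distillation (assumed form).

    Both inputs have shape (batch, time, channels). The FFT is applied
    along the temporal axis only, and the loss is the mean squared
    difference between the magnitude spectra of student and teacher.
    """
    s_spec = np.abs(np.fft.rfft(student_feat, axis=1))
    t_spec = np.abs(np.fft.rfft(teacher_feat, axis=1))
    return float(np.mean((s_spec - t_spec) ** 2))

# Toy check: identical features yield zero spectrum loss.
rng = np.random.default_rng(0)
feat = rng.standard_normal((2, 8, 4))
print(spectrum_loss(feat, feat))  # -> 0.0
```

In a real training loop this term would be added to the usual classification loss with a weighting coefficient; matching magnitudes (rather than raw complex spectra) makes the constraint invariant to temporal phase shifts, which is one plausible reason to distill in the frequency domain instead of matching features directly.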