Back to the Future: Knowledge Distillation for Human Action Anticipation
We consider the task of training a neural network to anticipate human actions in video. This task is challenging given the complexity of video data, the stochastic nature of the future, and the limited amount of annotated training data. In this paper, we propose a novel knowledge distillation framework that uses an action recognition network to supervise the training of an action anticipation network, guiding the latter to attend to the relevant information needed for correctly anticipating future actions. This framework is made possible by a novel loss function that accounts for positional shifts of semantic concepts in a dynamic video. The knowledge distillation framework is a form of self-supervised learning, and it takes advantage of unlabeled data. Experimental results on the JHMDB and EPIC-KITCHENS datasets show the effectiveness of our approach.
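To make the teacher-student setup concrete, here is a minimal PyTorch sketch of generic logit distillation under the configuration the abstract describes: a frozen action recognition network (which sees the full clip, including the action being performed) supervises an anticipation network (which sees only the preceding frames), and unlabeled clips can contribute through the soft-target term alone. The network, feature shapes, temperature, and loss weighting are illustrative assumptions; this is standard soft-label distillation, not the paper's positional-shift-aware loss.

```python
# Minimal teacher-student distillation sketch for action anticipation.
# All names, shapes, and hyperparameters below are illustrative assumptions,
# not the architecture or loss proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVideoNet(nn.Module):
    """Toy clip encoder: mean-pools a (B, T, D) feature sequence, then classifies."""
    def __init__(self, feat_dim=256, num_classes=21):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):
        return self.head(feats.mean(dim=1))  # temporal average pooling

def distillation_loss(student_logits, teacher_logits, labels=None, T=2.0, alpha=0.5):
    """KL divergence on temperature-softened logits; adds hard-label cross-entropy
    only when labels are available, so unlabeled clips still provide a signal."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    if labels is None:
        return soft
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage: the teacher sees the full clip containing the action; the student
# (anticipation network) sees only the earlier, pre-action frames.
teacher = SmallVideoNet().eval()   # pretrained recognition network, kept frozen
student = SmallVideoNet()          # anticipation network being trained
full_clip = torch.randn(4, 16, 256)
past_clip = torch.randn(4, 8, 256)
with torch.no_grad():
    t_logits = teacher(full_clip)
loss = distillation_loss(student(past_clip), t_logits,
                         labels=torch.randint(0, 21, (4,)))
loss.backward()
```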