Back to the Future: Knowledge Distillation for Human Action Anticipation

04/09/2019
by Vinh Tran, et al.

We consider the task of training a neural network to anticipate human actions in video. This task is challenging given the complexity of video data, the stochastic nature of the future, and the limited amount of annotated training data. In this paper, we propose a novel knowledge distillation framework that uses an action recognition network to supervise the training of an action anticipation network, guiding the latter to attend to the relevant information needed for correctly anticipating future actions. This framework is made possible by a novel loss function that accounts for positional shifts of semantic concepts in a dynamic video. The knowledge distillation framework is a form of self-supervised learning, and it takes advantage of unlabeled data. Experimental results on the JHMDB and EPIC-KITCHENS datasets show the effectiveness of our approach.
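
The abstract does not spell out the training objective, but the core idea of recognition-to-anticipation distillation can be sketched in a few lines of PyTorch. The sketch below is an illustration under stated assumptions, not the paper's method: the VideoEncoder backbone, the distillation_step helper, and the feature-matching and softened-logit terms are hypothetical stand-ins, and the paper's shift-tolerant loss for positional shifts of semantic concepts is not reproduced here.

```python
# Minimal sketch of teacher-student distillation for action anticipation.
# Assumptions (not from the paper): both networks are tiny 3D-conv encoders,
# and the distillation signal is an L2 match between pooled features plus a
# KL term on temperature-softened logits. The paper's actual loss additionally
# tolerates positional shifts of semantic concepts; that part is omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoEncoder(nn.Module):
    """Tiny 3D-conv encoder standing in for a recognition/anticipation backbone."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):                      # clip: (B, 3, T, H, W)
        feat = self.features(clip).flatten(1)     # (B, 32)
        return feat, self.classifier(feat)

def distillation_step(teacher, student, full_clip, observed_clip, labels,
                      temperature=4.0, alpha=0.5, beta=1.0):
    """One training step: the frozen teacher (recognition) sees the full clip,
    the student (anticipation) sees only the observed prefix and is pushed
    toward the teacher's features and predictions."""
    with torch.no_grad():
        t_feat, t_logits = teacher(full_clip)
    s_feat, s_logits = student(observed_clip)

    ce = F.cross_entropy(s_logits, labels)                      # ground-truth supervision
    kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                  F.softmax(t_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2     # soft-label distillation
    fm = F.mse_loss(s_feat, t_feat)                             # feature matching
    return ce + alpha * kd + beta * fm
```

Note that the feature-matching and soft-label terms require no action labels, which is one plausible way the framework can exploit unlabeled video, as the abstract suggests.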

Related research

09/03/2022 · A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition
Knowledge distillation is an effective transfer of knowledge from a heav...

08/18/2023 · Unlimited Knowledge Distillation for Action Recognition in the Dark
Dark videos often lose essential information, which causes the knowledge...

03/25/2023 · Multi-view knowledge distillation transformer for human action recognition
Recently, Transformer-based methods have been utilized to improve the pe...

02/17/2023 · Explicit and Implicit Knowledge Distillation via Unlabeled Data
Data-free knowledge distillation is a challenging model lightweight task...

07/15/2023 · SoccerKDNet: A Knowledge Distillation Framework for Action Recognition in Soccer Videos
Classifying player actions from soccer videos is a challenging problem, ...

08/10/2023 · Towards General and Fast Video Derain via Knowledge Distillation
As a common natural weather condition, rain can obscure video frames and...

11/23/2020 · Action Concept Grounding Network for Semantically-Consistent Video Generation
Recent works in self-supervised video prediction have mainly focused on ...
