Teacher Agent: A Non-Knowledge Distillation Method for Rehearsal-based Video Incremental Learning

06/01/2023
by   Shengqin Jiang, et al.

With the rise in popularity of video-based social media, new categories of videos are constantly being generated, creating an urgent need for robust incremental learning techniques for video understanding. One of the biggest challenges in this task is catastrophic forgetting, where the network tends to forget previously learned data while learning new categories. To overcome this issue, knowledge distillation is widely used in rehearsal-based video incremental learning: it transfers information about similarities among different categories to enhance the student model. It is therefore preferable to have a strong teacher model to guide the student. However, the limited performance of the network itself and the occurrence of catastrophic forgetting can cause the teacher network to make inaccurate predictions for some memory exemplars, ultimately limiting the student network's performance. Based on these observations, we propose a teacher agent capable of generating stable and accurate soft labels to replace the output of the teacher model. This method circumvents the misleading knowledge transfer caused by inaccurate teacher predictions and avoids the computational overhead of loading the teacher model for knowledge distillation. Extensive experiments demonstrate the advantages of our method, which yields significant performance improvements over recent state-of-the-art methods while using video clips at only half the resolution as input during the incremental phases. Moreover, our method surpasses the performance of joint training when the episodic memory holds four times the number of samples.
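To make the idea concrete, below is a minimal sketch of how such a teacher agent might be realized in PyTorch: instead of running a (possibly forgetting-prone) teacher forward pass on memory exemplars, stable soft labels are generated directly from the exemplars' stored ground-truth labels via label smoothing and used as distillation targets. The function names, the label-smoothing construction, and the temperature value are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a "teacher agent" replacing teacher predictions with generated
# soft labels for rehearsal-based distillation (hypothetical implementation).
import torch
import torch.nn.functional as F


def agent_soft_labels(labels: torch.Tensor, num_old_classes: int,
                      smoothing: float = 0.1) -> torch.Tensor:
    """Build stable soft labels for memory exemplars from their stored
    ground-truth labels, instead of querying a possibly inaccurate teacher."""
    # Spread a small probability mass over the non-target old classes.
    soft = torch.full((labels.size(0), num_old_classes),
                      smoothing / (num_old_classes - 1),
                      device=labels.device)
    # Assign the remaining mass to each exemplar's ground-truth class.
    soft.scatter_(1, labels.unsqueeze(1), 1.0 - smoothing)
    return soft


def distillation_loss(student_logits: torch.Tensor, soft_targets: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL-based distillation loss whose targets come from the agent, so no
    teacher model needs to be kept in memory or run at training time."""
    # Only the old-class logits are matched against the agent's targets.
    log_p = F.log_softmax(
        student_logits[:, :soft_targets.size(1)] / temperature, dim=1)
    return F.kl_div(log_p, soft_targets, reduction="batchmean") * temperature ** 2
```

In this sketch the distillation targets are fixed per exemplar, which mirrors the abstract's point about avoiding both inaccurate teacher predictions and the cost of loading a teacher network; the exact form of the paper's soft labels may differ.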

research
02/23/2022

Multi-Teacher Knowledge Distillation for Incremental Implicitly-Refined Classification

Incremental learning methods can learn new classes continually by distil...
research
12/21/2022

Incremental Learning for Neural Radiance Field with Uncertainty-Filtered Knowledge Distillation

Recent neural radiance field (NeRF) representation has achieved great su...
research
08/18/2023

Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning

In this work, we investigate exemplar-free class incremental learning (C...
research
09/01/2022

An Incremental Learning framework for Large-scale CTR Prediction

In this work we introduce an incremental learning framework for Click-Th...
research
06/03/2019

Random Path Selection for Incremental Learning

Incremental life-long learning is a main challenge towards the long-stan...
research
08/11/2021

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data

With the increasing popularity of deep learning on edge devices, compres...
research
03/09/2022

How many Observations are Enough? Knowledge Distillation for Trajectory Forecasting

Accurate prediction of future human positions is an essential task for m...
