Online Deep Metric Learning via Mutual Distillation

03/10/2022
by Gao-Dong Liu, et al.

Deep metric learning aims to transform input data into an embedding space in which similar samples are close and dissimilar samples are far apart. In practice, samples of new categories arrive incrementally, which requires periodic augmentation of the learned model. Fine-tuning on the new categories usually degrades performance on the old ones, a phenomenon known as "catastrophic forgetting". Existing solutions either retrain the model from scratch or require the replay of old samples during training. In this paper, a complete online deep metric learning framework based on mutual distillation is proposed for both one-task and multi-task scenarios. Unlike the teacher-student framework, the proposed approach treats the old and new learning tasks with equal importance, so no preference for either the old or the new knowledge is introduced. In addition, a novel virtual feature estimation approach is proposed to recover the features that would have been extracted by the old models. This allows distillation between the new and old models without replaying old training samples or retaining old models during training. A comprehensive study shows the superior performance of our approach across different backbones.
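The abstract does not spell out the loss, but as a rough, hypothetical sketch of two-way (mutual) distillation between embedding spaces: the function names, the temperature, and the use of batch similarity distributions below are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def similarity_log_probs(emb: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # Row-wise log-softmax over pairwise cosine similarities, so each
    # sample induces a distribution over the batch (the diagonal
    # self-similarity is kept for simplicity).
    z = F.normalize(emb, dim=1)
    sim = z @ z.t() / temperature
    return F.log_softmax(sim, dim=1)

def mutual_distillation_loss(new_emb: torch.Tensor,
                             virtual_old_emb: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    # Symmetric KL divergence between the similarity distributions of
    # the new model's embeddings and the virtual old-model features;
    # summing both directions means neither side acts as a fixed teacher.
    log_p_new = similarity_log_probs(new_emb, temperature)
    log_p_old = similarity_log_probs(virtual_old_emb, temperature)
    kl_a = F.kl_div(log_p_new, log_p_old, reduction="batchmean", log_target=True)
    kl_b = F.kl_div(log_p_old, log_p_new, reduction="batchmean", log_target=True)
    return 0.5 * (kl_a + kl_b)
```

In the full framework this term would presumably be weighted against the ordinary metric-learning loss on the new data, with virtual_old_emb supplied by the paper's virtual feature estimation rather than by a stored copy of the old model.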


Related research

Triplet Loss for Knowledge Distillation (04/17/2020)
In recent years, deep learning has spread rapidly, and deeper, larger mo...

Dual-Teacher Class-Incremental Learning With Data-Free Generative Replay (06/17/2021)
This paper proposes two novel knowledge transfer techniques for class-in...

Distilling Causal Effect of Data in Class-Incremental Learning (03/02/2021)
We propose a causal framework to explain the catastrophic forgetting in ...

Multi-Domain Multi-Task Rehearsal for Lifelong Learning (12/14/2020)
Rehearsal, seeking to remind the model by storing old knowledge in lifel...

Online Deep Learning from Doubly-Streaming Data (04/25/2022)
This paper investigates a new online learning problem with doubly-stream...

Preventing Catastrophic Forgetting in Continual Learning of New Natural Language Tasks (02/22/2023)
Multi-Task Learning (MTL) is widely-accepted in Natural Language Process...

Rethinking Few-Shot Class-Incremental Learning with Open-Set Hypothesis in Hyperbolic Geometry (07/20/2022)
Few-Shot Class-Incremental Learning (FSCIL) aims at incrementally learni...
