Intra-Utterance Similarity Preserving Knowledge Distillation for Audio Tagging

09/03/2020
by   Chun-Chieh Chang, et al.

Knowledge Distillation (KD) is a popular area of research for reducing the size of large models while still maintaining good performance. The outputs of larger teacher models are used to guide the training of smaller student models. Given the repetitive nature of acoustic events, we propose to leverage this information to regulate the KD training for Audio Tagging. This novel KD method, "Intra-Utterance Similarity Preserving KD" (IUSP), shows promising results for the audio tagging task. It is motivated by the previously published KD method "Similarity Preserving KD" (SP). However, instead of preserving the pairwise similarities between inputs within a mini-batch, our method preserves the pairwise similarities between the frames of a single input utterance. Our proposed KD method, IUSP, shows consistent improvements over SP across student models of different sizes on the DCASE 2019 Task 5 dataset for audio tagging. Relative to SP's improvement over the baseline, IUSP's improvement over the baseline is 27.1% larger.
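The core idea in the abstract is that, instead of matching pairwise similarities between examples in a mini-batch (as in SP), IUSP matches pairwise similarities between the frames of a single utterance. Below is a minimal PyTorch sketch of such a frame-level similarity-preserving loss. The function name, tensor shapes, and row normalization follow the SP formulation of Tung and Mori and are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of an intra-utterance similarity-preserving (IUSP) loss.
# Assumes frame-level features of shape (batch, frames, dim) from teacher and
# student; teacher and student feature dims may differ, since only the
# frames x frames similarity maps are compared.
import torch
import torch.nn.functional as F


def intra_utterance_sp_loss(student_feats: torch.Tensor,
                            teacher_feats: torch.Tensor) -> torch.Tensor:
    """Match the frame-by-frame similarity structure within each utterance."""

    def frame_similarity(feats: torch.Tensor) -> torch.Tensor:
        # Pairwise similarities between frames of the same utterance:
        # (batch, frames, dim) @ (batch, dim, frames) -> (batch, frames, frames)
        gram = torch.bmm(feats, feats.transpose(1, 2))
        # Row-wise L2 normalization, as in Similarity-Preserving KD
        return F.normalize(gram, p=2, dim=2)

    g_student = frame_similarity(student_feats)
    g_teacher = frame_similarity(teacher_feats)
    num_frames = student_feats.size(1)
    # Squared Frobenius distance between the two similarity maps,
    # averaged over the batch and scaled by frames^2 (mirroring SP's 1/b^2).
    return ((g_student - g_teacher) ** 2).sum(dim=(1, 2)).mean() / (num_frames ** 2)
```

A practical implementation would also need to align teacher and student frame rates (for example by interpolating one set of features) when their temporal resolutions differ, and would typically add this term to the usual distillation and classification losses with a weighting hyperparameter.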



Related research:

- 04/17/2020, Triplet Loss for Knowledge Distillation: In recent years, deep learning has spread rapidly, and deeper, larger mo...
- 03/18/2021, Similarity Transfer for Knowledge Distillation: Knowledge distillation is a popular paradigm for learning portable neura...
- 04/11/2020, Inter-Region Affinity Distillation for Road Marking Segmentation: We study the problem of distilling knowledge from a large deep teacher n...
- 09/15/2023, Two-Step Knowledge Distillation for Tiny Speech Enhancement: Tiny, causal models are crucial for embedded audio machine learning appl...
- 08/03/2021, DarkGAN: Exploiting Knowledge Distillation for Comprehensible Audio Synthesis with GANs: Generative Adversarial Networks (GANs) have achieved excellent audio syn...
- 08/23/2023, CED: Consistent ensemble distillation for audio tagging: Augmentation and knowledge distillation (KD) are well-established techni...
