Multi-Representation Knowledge Distillation For Audio Classification

02/22/2020
by Liang Gao, et al.

As an important component of multimedia analysis tasks, audio classification aims to discriminate between different types of audio signals and has received intensive attention due to its wide range of applications. Generally speaking, a raw signal can be transformed into various representations (such as the Short-Time Fourier Transform and Mel-Frequency Cepstral Coefficients), and the information implied in different representations can be complementary. Ensembling models trained on different representations can greatly boost classification performance; however, running inference with a large number of models is cumbersome and computationally expensive. In this paper, we propose a novel end-to-end collaborative learning framework for the audio classification task. The framework takes multiple representations as input to train the models in parallel, and the complementary information provided by the different representations is shared through knowledge distillation. Consequently, the performance of each individual model can be significantly improved without increasing the computational overhead at inference time. Extensive experimental results demonstrate that the proposed approach improves classification performance and achieves state-of-the-art results on both acoustic scene classification tasks and general audio tagging tasks.
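The abstract does not spell out the training objective, but the collaborative-distillation idea can be sketched as follows: two classifiers consume different representations of the same clip (here an STFT spectrogram and MFCCs computed with torchaudio), and each one distills its softened predictions into the other alongside the usual cross-entropy loss. The toy CNN, the front-end settings, the temperature, and the loss weighting below are illustrative assumptions rather than the paper's exact design.

```python
# Minimal sketch of collaborative knowledge distillation across two audio
# representations. Model, temperature and loss weighting are assumptions,
# not the exact configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

class SmallCNN(nn.Module):
    """Toy classifier over a 2-D time-frequency representation."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):                     # x: (batch, 1, freq, time)
        return self.head(self.features(x).flatten(1))

def distill(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL term; the teacher side is detached so each branch
    only learns from, and never back-propagates into, the other."""
    return F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

num_classes = 10
to_stft = torchaudio.transforms.Spectrogram(n_fft=1024, hop_length=512)
to_mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=40)
model_stft, model_mfcc = SmallCNN(num_classes), SmallCNN(num_classes)
optimizer = torch.optim.Adam(
    list(model_stft.parameters()) + list(model_mfcc.parameters()), lr=1e-3
)

# One collaborative training step on a dummy batch of 1-second, 16 kHz clips.
waveform = torch.randn(8, 16000)
labels = torch.randint(0, num_classes, (8,))

logits_a = model_stft(to_stft(waveform).unsqueeze(1))   # STFT branch
logits_b = model_mfcc(to_mfcc(waveform).unsqueeze(1))   # MFCC branch

loss = (F.cross_entropy(logits_a, labels) + distill(logits_a, logits_b)
        + F.cross_entropy(logits_b, labels) + distill(logits_b, logits_a))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the branches only exchange softened predictions during training, any single branch can be kept alone at inference time, which is what keeps the computational overhead unchanged relative to a single-representation model.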


Related research

10/27/2021 · Temporal Knowledge Distillation for On-device Audio Classification
Improving the performance of on-device audio classification models remai...

04/08/2019 · Audio Classification of Bit-Representation Waveform
This paper investigates waveform representation for audio signal classif...

06/30/2023 · Audio Embeddings as Teachers for Music Classification
Music classification has been one of the most popular tasks in the field...

03/14/2023 · Feature-Rich Audio Model Inversion for Data-Free Knowledge Distillation Towards General Sound Classification
Data-Free Knowledge Distillation (DFKD) has recently attracted growing a...

04/25/2022 · End-to-End Audio Strikes Back: Boosting Augmentations Towards An Efficient Audio Classification Network
While efficient architectures and a plethora of augmentations for end-to...

07/12/2022 · EfficientLEAF: A Faster LEarnable Audio Frontend of Questionable Use
In audio classification, differentiable auditory filterbanks with few pa...

03/03/2023 · Low-Complexity Audio Embedding Extractors
Solving tasks such as speaker recognition, music classification, or sema...
