
An empirical study of weakly supervised audio tagging embeddings for general audio representations

by Heinrich Dinkel et al.

We study the usability of pre-trained weakly supervised audio tagging (AT) models as feature extractors for general audio representations. We mainly analyze the feasibility of transferring those embeddings to other tasks within the speech and sound domains. Specifically, we benchmark weakly supervised pre-trained models (MobileNetV2 and EfficientNet-B0) against modern self-supervised learning methods (BYOL-A) as feature extractors. Fourteen downstream tasks are used for evaluation, ranging from music instrument classification to language classification. Our results indicate that AT pre-trained models are an excellent transfer learning choice for music, event, and emotion recognition tasks. Further, fine-tuning AT models can also benefit speech-related tasks such as keyword spotting and intent classification.
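The transfer protocol the abstract describes, freezing a pre-trained AT encoder and training only a lightweight classifier on its clip-level embeddings, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: a fixed random projection stands in for the frozen MobileNetV2/EfficientNet-B0 encoder, the two-class "downstream task" is synthetic, and the linear probe is a hand-rolled logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained audio-tagging encoder (the paper uses
# MobileNetV2 / EfficientNet-B0 trained with weak AudioSet labels); here a
# fixed random projection over 128-bin "log-mel" frames, purely illustrative.
EMBED_DIM = 64
W_frozen = rng.normal(size=(128, EMBED_DIM))  # frozen, never updated

def extract_embedding(logmel):
    """Map a (frames, 128) spectrogram to one clip-level embedding
    by projecting each frame and mean-pooling over time."""
    return np.tanh(logmel @ W_frozen).mean(axis=0)

# Toy downstream task: two classes whose clips differ in mean level.
def make_clip(label):
    return rng.normal(loc=1.0 if label else -1.0, size=(50, 128))

y = np.array([i % 2 for i in range(200)])
X = np.stack([extract_embedding(make_clip(lab)) for lab in y])

# Linear probe: logistic regression by gradient descent on top of the
# frozen embeddings -- the standard protocol for comparing representations.
w = np.zeros(EMBED_DIM)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(class 1)
    grad = p - y                              # dLoss/dLogits
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

acc = (((X @ w + b) > 0) == (y == 1)).mean()
print(f"linear-probe training accuracy: {acc:.2f}")
```

Fine-tuning, which the abstract reports helps speech tasks, would instead also update the encoder weights (here `W_frozen`) with a smaller learning rate rather than keeping them fixed.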




Related research:

- Improving Self-Supervised Learning for Audio Representations by Feature Diversity and Decorrelation
- Codified audio language modeling learns useful representations for music information retrieval
- Learning Music Representations with wav2vec 2.0
- Towards Learning Universal Audio Representations
- BYOL for Audio: Exploring Pre-trained General-purpose Audio Representations
- Deep Embeddings for Robust User-Based Amateur Vocal Percussion Classification
- Low-Complexity Audio Embedding Extractors