Pretext Tasks selection for multitask self-supervised speech representation learning

07/01/2021
by   Salah Zaiem, et al.

Through solving pretext tasks, self-supervised learning leverages unlabeled data to extract useful latent representations that replace traditional input features in the downstream task. In various application domains, including computer vision, natural language processing and audio/speech signal processing, a wide range of features were engineered through decades of research efforts. As it turns out, learning to predict such features has proven to be a particularly relevant pretext task, leading to useful self-supervised representations that are effective on downstream tasks. However, methods and common practices for combining such pretext tasks, where each task targets a different group of features, to improve performance on the downstream task have not been explored and understood properly. In fact, the process relies almost exclusively on a computationally heavy experimental procedure, which becomes intractable as the number of pretext tasks grows. This paper introduces a method to select a group of pretext tasks among a set of candidates. The proposed method estimates properly calibrated weights for the partial losses corresponding to the considered pretext tasks during the self-supervised training process. Experiments conducted on speaker recognition and automatic speech recognition validate our approach: the groups selected and weighted with our method outperform classic baselines, facilitating the selection and combination of relevant pseudo-labels for self-supervised representation learning.
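At its core, the approach described above trains a multitask SSL objective in which each candidate pretext task contributes a partial loss scaled by an estimated weight. The PyTorch sketch below illustrates only this weighted combination; the class name `WeightedPretextLoss`, the choice of an L1 regression loss, and the example feature names and weight values are illustrative assumptions, since the abstract does not specify how the weights are computed.

```python
import torch
import torch.nn as nn

class WeightedPretextLoss(nn.Module):
    """Combine per-pretext-task regression losses with fixed scalar weights.

    The weights are assumed to be estimated beforehand (e.g. from a
    task-relevance score); this sketch only shows how they enter the
    multitask SSL objective.
    """

    def __init__(self, task_names, task_weights):
        super().__init__()
        assert len(task_names) == len(task_weights)
        self.task_names = task_names
        # Register as a buffer so the weights follow the module's device.
        self.register_buffer("weights", torch.tensor(task_weights))
        self.criterion = nn.L1Loss()  # regression toward feature pseudo-labels

    def forward(self, predictions, targets):
        """predictions/targets: dicts mapping task name -> tensor."""
        total = 0.0
        for w, name in zip(self.weights, self.task_names):
            total = total + w * self.criterion(predictions[name], targets[name])
        return total

# Hypothetical usage: three classic speech features as pseudo-labels,
# with made-up weights standing in for the calibrated estimates.
loss_fn = WeightedPretextLoss(
    task_names=["f0", "log_energy", "mfcc"],
    task_weights=[0.8, 0.1, 0.6],
)
preds = {n: torch.randn(4, 100, requires_grad=True) for n in loss_fn.task_names}
labels = {n: torch.randn(4, 100) for n in loss_fn.task_names}
loss = loss_fn(preds, labels)
loss.backward()  # gradients flow into whatever encoder produced `preds`
```

Registering the weights as a (non-trainable) buffer reflects the setup suggested by the abstract, where the weights are estimated rather than learned jointly with the encoder; a learnable `nn.Parameter` would be the alternative design.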


Related research

04/15/2021: Conditional independence for pretext task selection in Self-supervised speech representation learning
Through solving pretext tasks, self-supervised learning (SSL) leverages ...

05/21/2022: Self-Supervised Speech Representation Learning: A Review
Although supervised deep learning has revolutionized speech and audio pr...

12/20/2022: Exploring Effective Fusion Algorithms for Speech Based Self-Supervised Learning Models
Self-supervised learning (SSL) has achieved great success in various are...

04/08/2022: Automatic Data Augmentation Selection and Parametrization in Contrastive Self-Supervised Speech Representation Learning
Contrastive learning enables learning useful audio and speech representa...

08/28/2023: Speech Self-Supervised Representations Benchmarking: a Case for Larger Probing Heads
Self-supervised learning (SSL) leverages large datasets of unlabeled spe...

07/31/2023: Predicting masked tokens in stochastic locations improves masked image modeling
Self-supervised learning is a promising paradigm in deep learning that e...

09/11/2023: Towards generalisable and calibrated synthetic speech detection with self-supervised representations
Generalisation – the ability of a model to perform well on unseen data –...
