Match to Win: Analysing Sequences Lengths for Efficient Self-supervised Learning in Speech and Audio

09/30/2022
by Yan Gao et al.

Self-supervised learning (SSL) has proven vital in speech and audio-related applications. The paradigm trains a general model on unlabeled data that can later be used to solve specific downstream tasks. This type of model is costly to train, as it requires manipulating long input sequences that can only be handled by powerful centralised servers. Surprisingly, despite many attempts to increase training efficiency through model compression, the effect of truncating input sequence lengths to reduce computation has not been studied. In this paper, we provide the first empirical study of SSL pre-training for different specified sequence lengths and link this to various downstream tasks. We find that training on short sequences can dramatically reduce resource costs while retaining satisfactory performance on all tasks. This simple one-line change would promote the migration of SSL training from data centres to user-end edge devices for more realistic and personalised applications.


Related research

10/15/2021  Don't speak too fast: The impact of data bias on self-supervised speech models
  Self-supervised Speech Models (S3Ms) have been proven successful in many...

04/25/2021  How Well Self-Supervised Pre-Training Performs with Streaming Data?
  The common self-supervised pre-training practice requires collecting mas...

10/28/2022  Spectrograms Are Sequences of Patches
  Self-supervised pre-training models have been used successfully in sever...

03/14/2022  Lead-agnostic Self-supervised Learning for Local and Global Representations of Electrocardiogram
  In recent years, self-supervised learning methods have shown significant...

06/02/2021  Learning to Rehearse in Long Sequence Memorization
  Existing reasoning tasks often have an important assumption that the inp...

11/04/2022  Once-for-All Sequence Compression for Self-Supervised Speech Models
  The sequence length along the time axis is often the dominant factor of ...

02/10/2023  Self-Supervised Learning-Based Cervical Cytology Diagnostics in Low-Data Regime and Low-Resource Setting
  Screening Papanicolaou test samples effectively reduces cervical cancer-...
