On Compressing Sequences for Self-Supervised Speech Models

10/13/2022
by Yen Meng, et al.

Compressing self-supervised models has become increasingly necessary as these models grow larger. While previous approaches have primarily focused on reducing model size, shortening sequences is also effective in reducing computational cost. In this work, we study fixed-length and variable-length subsampling along the time axis in self-supervised learning, and explore how sensitive individual downstream tasks are to input frame rates. Subsampling while training self-supervised models not only improves overall performance on downstream tasks at certain frame rates, but also brings significant speed-ups at inference. Variable-length subsampling performs particularly well at low frame rates. In addition, if we have access to phonetic boundaries, we find no degradation in performance at average frame rates as low as 10 Hz.
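The two subsampling strategies in the abstract can be sketched as pooling operations over a sequence of frame-level features. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, fixed-length subsampling is shown as strided average pooling, and variable-length subsampling as mean-pooling within segments given boundary indices (e.g. phone boundaries).

```python
import numpy as np

def fixed_length_subsample(frames, factor):
    """Average-pool frame features along time with a fixed stride.

    frames: (T, D) array of frame-level features.
    factor: subsampling factor (e.g. 5 turns 50 Hz features into 10 Hz).
    """
    T, D = frames.shape
    n = T // factor  # drop any trailing frames that do not fill a window
    return frames[:n * factor].reshape(n, factor, D).mean(axis=1)

def variable_length_subsample(frames, boundaries):
    """Mean-pool frames within each segment defined by boundary indices.

    boundaries: sorted frame indices marking segment starts (after the
    first segment), e.g. phonetic boundaries.
    """
    segments = np.split(frames, boundaries)
    return np.stack([seg.mean(axis=0) for seg in segments if len(seg)])

# toy example: 100 frames of 8-dim features, nominally at 50 Hz
feats = np.random.randn(100, 8)
print(fixed_length_subsample(feats, 5).shape)            # (20, 8): 10 Hz
print(variable_length_subsample(feats, [30, 55, 80]).shape)  # (4, 8)
```

With fixed-length subsampling the output frame rate is uniform, while the variable-length variant produces one vector per segment, so the average output rate depends on how many boundaries occur per second.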


