Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation

05/18/2020
by Po-Han Chi, et al.

For self-supervised speech processing, it is crucial to use pretrained models as speech representation extractors. In recent work, increasing model size has been a common way to improve acoustic model performance. In this paper, we propose Audio ALBERT, a lite version of a self-supervised speech representation model. We evaluate the representations on two downstream tasks: speaker identification and phoneme classification. We show that Audio ALBERT achieves performance competitive with those much larger models on the downstream tasks while using 91% fewer parameters. Moreover, we use simple probing models to measure how much speaker and phoneme information is encoded in the latent representations. In the probing experiments, we find that the intermediate latent representations encode richer speaker and phoneme information than the last layer does.
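The headline parameter reduction comes from ALBERT-style cross-layer parameter sharing: a single transformer encoder layer is reused at every depth instead of allocating distinct weights per layer. The sketch below illustrates the arithmetic with assumed BERT-base-like sizes (768 hidden units, 3072 feed-forward units, 12 layers); the exact configuration and accounting in the paper may differ.

```python
# Illustrative parameter count for cross-layer sharing.
# Sizes below are assumptions (BERT-base-like), not the paper's exact setup.

def transformer_layer_params(d_model: int, d_ff: int) -> int:
    """Rough parameter count of one transformer encoder layer."""
    # Self-attention: Q, K, V, and output projections (weights + biases).
    attn = 4 * (d_model * d_model + d_model)
    # Position-wise feed-forward: two linear layers.
    ff = (d_model * d_ff + d_ff) + (d_ff * d_model + d_model)
    # Two layer norms (scale + bias each).
    ln = 2 * (2 * d_model)
    return attn + ff + ln

d_model, d_ff, n_layers = 768, 3072, 12

# Without sharing: every layer has its own weights.
distinct = n_layers * transformer_layer_params(d_model, d_ff)
# With ALBERT-style sharing: one layer's weights reused n_layers times.
shared = transformer_layer_params(d_model, d_ff)

reduction = 1 - shared / distinct
print(f"distinct: {distinct:,}  shared: {shared:,}  reduction: {reduction:.1%}")
```

With 12 layers the shared encoder keeps 1/12 of the layer parameters, a reduction of about 91.7%, which is consistent with the roughly 91% figure quoted above (embedding and head parameters, ignored here, shift the exact number).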


Related research

10/14/2021 - Conformer-Based Self-Supervised Learning for Non-Speech Audio Tasks
Representation learning from unlabeled data has been of major interest i...

02/24/2023 - Phone and speaker spatial organization in self-supervised speech representations
Self-supervised representations of speech are currently being widely use...

10/16/2022 - SUPERB @ SLT 2022: Challenge on Generalization and Efficiency of Self-Supervised Speech Representation Learning
We present the SUPERB challenge at SLT 2022, which aims at learning self...

07/09/2021 - Dropout Regularization for Self-Supervised Learning of Transformer Encoder Speech Representation
Predicting the altered acoustic frames is an effective way of self-super...

06/09/2020 - Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer
In this paper, we seek to reduce the computation complexity of transform...

10/21/2022 - Evidence of Vocal Tract Articulation in Self-Supervised Learning of Speech
Recent self-supervised learning (SSL) models have proven to learn rich r...

06/01/2023 - Exploration on HuBERT with Multiple Resolutions
Hidden-unit BERT (HuBERT) is a widely-used self-supervised learning (SSL...
