Dropout Regularization for Self-Supervised Learning of Transformer Encoder Speech Representation

07/09/2021
by   Jian Luo, et al.

Predicting altered acoustic frames is an effective self-supervised learning objective for speech representation. However, preventing the pretrained model from overfitting is challenging. In this paper, we propose introducing two dropout regularization methods into the pretraining of the transformer encoder: (1) attention dropout and (2) layer dropout. Both dropout methods encourage the model to utilize global speech information rather than merely copying local spectral features when reconstructing the masked frames. We evaluated the proposed methods on phoneme classification and speaker recognition tasks. The experiments demonstrate that our dropout approaches achieve competitive results and improve classification accuracy on downstream tasks.
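As a concrete illustration of the two regularizers the abstract describes, here is a minimal PyTorch sketch of a transformer encoder layer with attention dropout (dropout applied to the post-softmax attention weights) and layer dropout (stochastically skipping the whole layer during training, also known as stochastic depth). The class name, model dimensions, and dropout rates are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class EncoderLayerWithDropout(nn.Module):
    """Hypothetical encoder layer sketching the two dropout regularizers
    described in the abstract (not the paper's official code)."""

    def __init__(self, d_model=768, n_heads=12,
                 attn_dropout=0.1, layer_dropout=0.1):
        super().__init__()
        # (1) Attention dropout: nn.MultiheadAttention applies `dropout`
        # to the attention weight matrix after the softmax.
        self.self_attn = nn.MultiheadAttention(
            d_model, n_heads, dropout=attn_dropout, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.layer_dropout = layer_dropout

    def forward(self, x):
        # (2) Layer dropout: during training, skip this layer entirely with
        # probability `layer_dropout`, so reconstructing masked frames
        # cannot rely on any single layer copying local spectral features.
        if self.training and torch.rand(1).item() < self.layer_dropout:
            return x
        attn_out, _ = self.self_attn(x, x, x, need_weights=False)
        x = self.norm1(x + attn_out)
        x = self.norm2(x + self.ffn(x))
        return x

# Usage on a batch of (masked) acoustic frame features: shapes are assumed.
frames = torch.randn(8, 200, 768)   # (batch, time, feature)
out = EncoderLayerWithDropout()(frames)  # same shape as input
```

Because the skipped layer reduces to an identity map, layer dropout needs no extra parameters; both regularizers act only at pretraining time and are disabled in eval mode.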

Related research

05/18/2020
Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation
For self-supervised speech processing, it is crucial to use pretrained m...

07/12/2020
TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech
We introduce a self-supervised speech pre-training method called TERA, w...

05/17/2022
Perturbation of Deep Autoencoder Weights for Model Compression and Classification of Tabular Data
Fully connected deep neural networks (DNN) often include redundant weigh...

06/09/2020
Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer
In this paper, we seek to reduce the computation complexity of transform...

08/06/2019
Self-Balanced Dropout
Dropout is known as an effective way to reduce overfitting via preventin...

05/08/2018
Image Ordinal Classification and Understanding: Grid Dropout with Masking Label
Image ordinal classification refers to predicting a discrete target valu...

04/11/2021
UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost
Transformer architecture achieves great success in abundant natural lang...
