Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training

04/02/2021
by Wei-Ning Hsu, et al.

Self-supervised learning of speech representations has been a very active research area, but most work focuses on a single domain, such as read audio books, for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%. This has clear practical implications, since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at https://github.com/pytorch/fairseq.
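For readers who want to try a released wav2vec 2.0 checkpoint on their own target-domain audio, the sketch below shows one possible way to do so with the Hugging Face transformers wrappers. This is only an illustration, not the authors' fairseq recipe; the checkpoint identifier and the placeholder waveform are assumptions for the example.

# Hedged illustration: running a pre-trained wav2vec 2.0 model on
# target-domain audio with Hugging Face transformers. This is not the
# authors' fairseq recipe; the checkpoint id below is an assumption.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "facebook/wav2vec2-large-robust-ft-swbd-300h"  # assumed checkpoint id
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.eval()

# Replace this placeholder with 16 kHz mono audio from your target domain.
waveform = [0.0] * 16000  # one second of silence, for illustration only

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, time, vocab)

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # greedy CTC decode of the prediction

Swapping the checkpoint for one pre-trained on different unlabeled domains is how one would probe the domain-shift effects the paper studies.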
