Comparison of Self-Supervised Speech Pre-Training Methods on Flemish Dutch

09/29/2021
by Jakob Poncelet, et al.

Recent research in speech processing exhibits a growing interest in unsupervised and self-supervised representation learning from unlabelled data to alleviate the need for large amounts of annotated data. We investigate several popular pre-training methods and apply them to Flemish Dutch. We compare off-the-shelf English pre-trained models to models trained on an increasing amount of Flemish data. We find that the most important factors for positive transfer to downstream speech recognition tasks are a substantial amount of data and a matching pre-training domain. Ideally, we also fine-tune on an annotated subset in the target language. All pre-trained models improve linear phone separability in Flemish, but not all methods improve Automatic Speech Recognition. We observe superior performance with wav2vec 2.0, and we obtain a 30% WER improvement with the multilingually pre-trained XLSR-53 model on Flemish Dutch, after integration into an HMM-DNN acoustic model.
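The "linear phone separability" measure mentioned in the abstract can be illustrated with a short sketch: extract frame-level representations from a frozen pre-trained encoder and fit a single linear classifier on phone labels. The sketch below is an assumption-laden illustration, not the paper's exact setup; it uses the Hugging Face transformers checkpoint facebook/wav2vec2-large-xlsr-53 (the XLSR-53 model named in the abstract) and toy random data as a stand-in for Flemish audio with forced-aligned phone labels.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Frozen pre-trained encoder; the XLSR-53 checkpoint named in the abstract.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")
encoder.eval()

def frame_features(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Return frame-level representations (T x D) from the frozen encoder."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # shape (1, T, D)
    return hidden.squeeze(0).numpy()

# Toy stand-in data: random 1-second "utterances" with random per-frame phone
# ids, replacing real Flemish speech with forced-aligned phone labels.
rng = np.random.default_rng(0)
utterances = [rng.standard_normal(16000).astype(np.float32) for _ in range(2)]
feats = [frame_features(w) for w in utterances]
labels = [rng.integers(0, 5, size=len(f)) for f in feats]
X, y = np.concatenate(feats), np.concatenate(labels)

# A single linear classifier on frozen features: higher frame-level phone
# accuracy indicates better linear phone separability of the representation.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("frame-level phone accuracy:", probe.score(X, y))
```

Freezing the encoder isolates representation quality from fine-tuning effects; the ASR gains reported in the abstract additionally involve fine-tuning and integration into an HMM-DNN acoustic model, which this probe does not cover.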

research · 03/01/2022
Measuring the Impact of Individual Domain Factors in Self-Supervised Pre-Training
Human speech data comprises a rich set of domain factors such as accent,...

research · 07/30/2023
Mispronunciation detection using self-supervised speech representations
In recent years, self-supervised learning (SSL) models have produced pro...

research · 10/29/2020
Self-supervised Pre-training Reduces Label Permutation Instability of Speech Separation
Speech separation has been well-developed while there are still problems...

research · 06/24/2022
Predicting within and across language phoneme recognition performance of self-supervised learning speech pre-trained models
In this work, we analyzed and compared speech representations extracted ...

research · 01/20/2023
Self-Supervised Learning for Data Scarcity in a Fatigue Damage Prognostic Problem
With the increasing availability of data for Prognostics and Health Mana...

research · 03/30/2021
Pre-training strategies and datasets for facial representation learning
What is the best way to learn a universal face representation? Recent wo...

research · 05/20/2020
A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition
Building a good speech recognition system usually requires large amounts...
