Population Based Training for Data Augmentation and Regularization in Speech Recognition

10/08/2020
by   Daniel Haziza, et al.

Varying data augmentation policies and regularization over the course of optimization has been shown to improve performance over using fixed values. We show that population based training is a useful tool for continuously searching these hyperparameters within a fixed budget, which greatly reduces the experimental burden and computational cost of finding good schedules. We experiment in speech recognition by optimizing SpecAugment this way, as well as dropout. The result compares favorably to a baseline that keeps those hyperparameters fixed over the course of training, with an 8% relative improvement. We obtain a 5.18% word error rate.
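The exploit-and-explore loop behind population based training can be sketched as follows. This is a toy illustration, not the paper's actual ASR setup: the quadratic "evaluate" loss, the hyperparameter names (`dropout`, `time_mask`), and the 0.8/1.2 perturbation factors are all assumptions made for the example.

```python
import random

def evaluate(hparams):
    """Stand-in for validation error: pretend dropout=0.3 and a
    SpecAugment time-mask width of 40 frames are optimal."""
    return (hparams["dropout"] - 0.3) ** 2 + ((hparams["time_mask"] - 40) / 100) ** 2

def perturb(hparams, rng):
    """Explore: jitter each hyperparameter by a factor of 0.8 or 1.2."""
    return {k: v * rng.choice([0.8, 1.2]) for k, v in hparams.items()}

def pbt(population_size=8, steps=20, seed=0):
    rng = random.Random(seed)
    population = [
        {"dropout": rng.uniform(0.0, 0.6), "time_mask": rng.uniform(10.0, 80.0)}
        for _ in range(population_size)
    ]
    quarter = population_size // 4
    for _ in range(steps):
        # In real PBT each member would also train its model for a while
        # here; this sketch only re-scores the hyperparameters.
        order = sorted(range(population_size), key=lambda i: evaluate(population[i]))
        winners, losers = order[:quarter], order[-quarter:]
        # Exploit: losers copy a winner's hyperparameters, then explore.
        for i in losers:
            population[i] = perturb(dict(population[rng.choice(winners)]), rng)
    return min(population, key=evaluate)

best = pbt()
print(best)
```

Because the best member is never in the replaced bottom quarter, the population's best score is non-increasing over steps, so the search never regresses while the hyperparameter schedule it implies keeps adapting.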


Related research

12/19/2017
Improved Regularization Techniques for End-to-End Speech Recognition
Regularization is important for end-to-end speech models, since the mode...

11/15/2021
Data Augmentation for Speech Recognition in Maltese: A Low-Resource Perspective
Developing speech technologies is a challenge for low-resource languages...

10/02/2021
Significance of Data Augmentation for Improving Cleft Lip and Palate Speech Recognition
The automatic recognition of pathological speech, particularly from chil...

04/18/2019
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
We present SpecAugment, a simple data augmentation method for speech rec...

11/12/2020
The CUHK-TUDELFT System for The SLT 2021 Children Speech Recognition Challenge
This technical report describes our submission to the 2021 SLT Children ...

01/30/2020
BUT Opensat 2019 Speech Recognition System
The paper describes the BUT Automatic Speech Recognition (ASR) systems s...

06/08/2020
The Penalty Imposed by Ablated Data Augmentation
There is a set of data augmentation techniques that ablate parts of the ...
