Towards Better Meta-Initialization with Task Augmentation for Kindergarten-aged Speech Recognition

02/24/2022
by   Yunzheng Zhu, et al.

Children's automatic speech recognition (ASR) is difficult due, in part, to the data scarcity problem, especially for kindergarten-aged kids. When data are scarce, the model can overfit to the training data, so good starting points for training are essential. Recently, meta-learning was proposed to learn model initialization (MI) for ASR tasks across different languages. This method leads to good performance when the model is adapted to an unseen language. However, MI is vulnerable to overfitting on training tasks (learner overfitting), and it is unknown whether MI generalizes to other low-resource tasks. In this paper, we validate the effectiveness of MI for children's ASR and attempt to alleviate learner overfitting. To apply model-agnostic meta-learning (MAML), we regard children's speech at each age as a different task. To address learner overfitting, we propose a task-level augmentation method that simulates new ages using frequency warping techniques. Detailed experiments show the impact of task augmentation on each age for kindergarten-aged speech. As a result, our approach achieves a relative word error rate (WER) improvement of 51% over the baseline system with no augmentation or initialization.
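To make the age-as-task setup concrete, the sketch below shows one way such a pipeline could look: a first-order MAML meta-update over per-age tasks, with extra tasks simulated by linearly warping the frequency axis of the input spectrograms. This is a minimal illustration under stated assumptions, not the paper's actual system; the PyTorch framework, the toy acoustic model, the warp_frequency helper, the warp factors, and all data shapes and hyperparameters are placeholders chosen for readability.

# Minimal sketch: first-order MAML over per-age tasks, with simulated
# ages created by warping the frequency axis of spectrogram features.
# All names, shapes, and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp_frequency(spec: torch.Tensor, alpha: float) -> torch.Tensor:
    """Linearly warp the frequency axis of a (batch, freq, time) spectrogram
    by factor alpha, standing in for the frequency-warping augmentation
    that simulates a new 'age' task."""
    b, f, t = spec.shape
    warped = F.interpolate(spec.unsqueeze(1), size=(int(f * alpha), t),
                           mode="bilinear", align_corners=False).squeeze(1)
    # Crop or zero-pad back to the original number of frequency bins.
    if warped.size(1) >= f:
        return warped[:, :f, :]
    return F.pad(warped, (0, 0, 0, f - warped.size(1)))

class TinyAcousticModel(nn.Module):
    """Toy per-frame classifier standing in for the ASR acoustic model."""
    def __init__(self, n_freq: int = 40, n_phones: int = 42):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_freq, 128), nn.ReLU(),
                                 nn.Linear(128, n_phones))
    def forward(self, x):                    # x: (batch, freq, time)
        return self.net(x.transpose(1, 2))   # (batch, time, n_phones)

def maml_step(model, tasks, inner_lr=1e-2, meta_lr=1e-3):
    """One first-order MAML update: adapt a copy of the model on each
    age task's support set, then update the shared initialization
    using the query-set gradients of the adapted copies."""
    meta_opt = torch.optim.SGD(model.parameters(), lr=meta_lr)
    meta_opt.zero_grad()
    for (x_s, y_s), (x_q, y_q) in tasks:
        learner = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        # Inner loop: one adaptation step on the support set.
        inner_opt.zero_grad()
        F.cross_entropy(learner(x_s).flatten(0, 1), y_s.flatten()).backward()
        inner_opt.step()
        # Outer loop: evaluate the adapted learner on the query set and
        # accumulate its gradients onto the shared initialization
        # (first-order approximation, ignoring second-order terms).
        learner.zero_grad()
        F.cross_entropy(learner(x_q).flatten(0, 1), y_q.flatten()).backward()
        for p, lp in zip(model.parameters(), learner.parameters()):
            g = lp.grad.detach().clone()
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyAcousticModel()
    def toy_task(alpha=1.0):
        x = warp_frequency(torch.randn(4, 40, 50), alpha)
        y = torch.randint(0, 42, (4, 50))
        return (x, y), (x.clone(), y.clone())
    # Real age tasks (alpha = 1.0) plus simulated ages via frequency warping.
    tasks = [toy_task(a) for a in (1.0, 1.0, 0.9, 1.1)]
    maml_step(model, tasks)
    print("one meta-update done")

The warp factors around 1.0 mimic the shorter vocal tracts of younger speakers shifting formants upward; adding such warped copies as extra meta-training tasks is one plausible reading of the task-level augmentation described in the abstract.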
