Adversarial Meta Sampling for Multilingual Low-Resource Speech Recognition

12/22/2020
by Yubei Xiao, et al.

Low-resource automatic speech recognition (ASR) is challenging because the limited target-language data cannot train an ASR model well. To address this issue, meta-learning formulates the ASR data of each source language into many small ASR tasks and meta-learns a model initialization over all tasks from the different source languages, enabling fast adaptation to unseen target languages. However, task quantity and difficulty vary greatly across source languages because of their different data scales and diverse phonological systems; this causes task-quantity and task-difficulty imbalance and thus the failure of multilingual meta-learning ASR (MML-ASR). In this work, we solve this problem by developing a novel adversarial meta sampling (AMS) approach to improve MML-ASR. When sampling tasks in MML-ASR, AMS adaptively determines the task sampling probability for each source language. Specifically, a large query loss for a source language indicates that its tasks have not been sampled well enough, in terms of quantity and difficulty, to train the ASR model, and that the language should therefore be sampled more frequently for further learning. Motivated by this observation, we feed the historical task query losses of all source-language domains into a network that learns a task sampling policy by adversarially increasing the current query loss of MML-ASR. The learned task sampling policy thus tracks the learning state of each language and predicts a good task sampling probability for each language, leading to more effective learning. Finally, experimental results on two multilingual datasets show significant performance improvements when applying AMS to MML-ASR, and also demonstrate its applicability to other low-resource speech tasks and to transfer-learning ASR approaches. Our code is available at: https://github.com/iamxiaoyubei/AMS.
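To make the sampling mechanism described above concrete, the sketch below shows one way it could be wired up in PyTorch. It is a minimal illustration under our own assumptions, not the authors' released implementation: every name in it (AdversarialTaskSampler, sample_task_fn, inner_adapt_fn, the REINFORCE-style surrogate, and so on) is hypothetical. A small network scores each source language from its recent query losses to produce task sampling probabilities; the ASR meta-learner is updated to decrease the query loss, while the sampler is updated to increase it.

```python
# Hypothetical sketch of adversarial meta sampling (not the released code).
import torch
import torch.nn as nn


class AdversarialTaskSampler(nn.Module):
    """Maps each source language's recent query losses to a sampling probability."""

    def __init__(self, num_languages: int, history_len: int = 5, hidden: int = 32):
        super().__init__()
        # One shared MLP scores each language from its loss history.
        self.scorer = nn.Sequential(
            nn.Linear(history_len, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Rolling buffer of recent query losses, one row per source language.
        self.register_buffer("loss_history",
                             torch.zeros(num_languages, history_len))

    def forward(self) -> torch.Tensor:
        # Per-language scores -> task sampling probabilities.
        scores = self.scorer(self.loss_history).squeeze(-1)   # (num_languages,)
        return torch.softmax(scores, dim=-1)

    @torch.no_grad()
    def update_history(self, lang_id: int, query_loss: float):
        # Shift the language's history left and append the newest query loss.
        self.loss_history[lang_id] = torch.roll(self.loss_history[lang_id], -1)
        self.loss_history[lang_id, -1] = query_loss


def meta_train_step(asr_model, sampler, sample_task_fn, inner_adapt_fn,
                    asr_opt, sampler_opt):
    """One MML-ASR step with adversarial sampling (all helpers are hypothetical).

    sample_task_fn(lang_id) -> (support_batch, query_batch)
    inner_adapt_fn(asr_model, support_batch) -> adapted model (e.g. MAML inner loop)
    """
    probs = sampler()                                   # task sampling policy
    lang_id = torch.multinomial(probs, 1).item()        # pick a source language
    support, query = sample_task_fn(lang_id)

    adapted = inner_adapt_fn(asr_model, support)        # fast adaptation on support set
    query_loss = adapted.loss(query)                    # meta (query) loss, assumed API

    # ASR meta-update: minimize the query loss.
    asr_opt.zero_grad()
    query_loss.backward()
    asr_opt.step()

    # Adversarial sampler update: raise the sampling probability of languages
    # whose tasks currently yield large query losses (REINFORCE-style surrogate,
    # an assumption made for this sketch).
    sampler_opt.zero_grad()
    sampler_loss = -torch.log(probs[lang_id]) * query_loss.detach()
    sampler_loss.backward()
    sampler_opt.step()

    sampler.update_history(lang_id, query_loss.item())
```

In this reading, the sampler plays the adversary: it keeps steering sampling toward languages whose tasks the current initialization still handles poorly, which is one plausible way to realize "adversarially increasing the current query loss" from the abstract.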


Related research

- Meta Learning for End-to-End Low-Resource Speech Recognition (10/26/2019)
- Adaptive Activation Network For Low Resource Multilingual Speech Recognition (05/28/2022)
- A bandit approach to curriculum generation for automatic speech recognition (02/06/2021)
- Towards Better Meta-Initialization with Task Augmentation for Kindergarten-aged Speech Recognition (02/24/2022)
- Improving Massively Multilingual ASR With Auxiliary CTC Objectives (02/24/2023)
- Vietnamese Capitalization and Punctuation Recovery Models (07/04/2022)
- A meta learning scheme for fast accent domain expansion in Mandarin speech recognition (07/23/2023)
