Unsupervised Adaptation with Interpretable Disentangled Representations for Distant Conversational Speech Recognition

06/13/2018
by   Wei-Ning Hsu, et al.

The current trend in automatic speech recognition is to leverage large amounts of labeled data to train supervised neural network models. Unfortunately, obtaining data for a wide range of domains to train robust models can be costly. However, it is relatively inexpensive to collect large amounts of unlabeled data from domains that we want the models to generalize to. In this paper, we propose a novel unsupervised adaptation method that learns to synthesize labeled data for the target domain from unlabeled in-domain data and labeled out-of-domain data. We first learn without supervision an interpretable latent representation of speech that encodes linguistic and nuisance factors (e.g., speaker and channel) using different latent variables. To transform a labeled out-of-domain utterance without altering its transcript, we transform the latent nuisance variables while maintaining the linguistic variables. To demonstrate our approach, we focus on a channel mismatch setting, where the domain of interest is distant conversational speech, and labels are only available for close-talking speech. Our proposed method is evaluated on the AMI dataset, outperforming all baselines and bridging the gap between unadapted and in-domain models by over 77%.
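The following is a minimal sketch of the transformation idea described above, not the authors' implementation: encode a labeled close-talking utterance into separate linguistic and nuisance latents, shift the nuisance latent toward statistics estimated from unlabeled distant speech, and decode to synthesize distant-like features that keep the original transcript. The encoder/decoder stubs, the function names, and the simple mean-shift transform are illustrative assumptions standing in for a trained disentangled (factorized) variational autoencoder.

```python
# Sketch of nuisance-latent domain transformation (assumed names and toy models).
import numpy as np

LATENT_DIM = 32
FEAT_DIM = 80  # e.g., log-mel filterbank features per frame


def encode(features: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Placeholder encoder: returns (linguistic latents, nuisance latent).

    A trained model would infer these; here we project the features with
    fixed random matrices so the sketch runs end to end.
    """
    rng = np.random.default_rng(0)
    w_ling = rng.standard_normal((FEAT_DIM, LATENT_DIM))
    w_nuis = rng.standard_normal((FEAT_DIM, LATENT_DIM))
    z_linguistic = features @ w_ling              # (frames, LATENT_DIM)
    z_nuisance = features.mean(axis=0) @ w_nuis   # (LATENT_DIM,) utterance-level
    return z_linguistic, z_nuisance


def decode(z_linguistic: np.ndarray, z_nuisance: np.ndarray) -> np.ndarray:
    """Placeholder decoder: maps the two latents back to feature space."""
    rng = np.random.default_rng(1)
    w_out = rng.standard_normal((2 * LATENT_DIM, FEAT_DIM))
    z = np.concatenate(
        [z_linguistic, np.broadcast_to(z_nuisance, z_linguistic.shape)], axis=1
    )
    return z @ w_out


def transform_to_target_domain(
    src_features: np.ndarray,
    src_nuisance_mean: np.ndarray,
    tgt_nuisance_mean: np.ndarray,
) -> np.ndarray:
    """Keep the linguistic latents; shift the nuisance latent toward the target domain."""
    z_linguistic, z_nuisance = encode(src_features)
    z_nuisance_shifted = z_nuisance + (tgt_nuisance_mean - src_nuisance_mean)
    return decode(z_linguistic, z_nuisance_shifted)


if __name__ == "__main__":
    # Toy data: one labeled close-talking utterance plus per-domain nuisance means
    # that would normally be estimated from many encoded unlabeled utterances.
    utterance = np.random.randn(200, FEAT_DIM)    # 200 frames of features
    close_talk_mean = np.zeros(LATENT_DIM)
    distant_mean = np.full(LATENT_DIM, 0.5)

    synthesized = transform_to_target_domain(utterance, close_talk_mean, distant_mean)
    print(synthesized.shape)  # (200, 80): distant-like features, same transcript
```

The synthesized features inherit the transcript of the source utterance, so they can be paired with the existing labels to train an acoustic model for the distant-speech domain.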

