Unsupervised Learning of Disentangled Speech Content and Style Representation

by Andros Tjandra, et al.

We present an approach for the unsupervised learning of speech representations that disentangle content and style. Our model consists of: (1) a local encoder that captures per-frame information; (2) a global encoder that captures per-utterance information; and (3) a conditional decoder that reconstructs speech given the local and global latent variables. Our experiments show that (1) the local latent variables encode speech content, as the reconstructed speech can be recognized by an ASR system with a low word error rate (WER), even when paired with a different global encoding; and (2) the global latent variables encode speaker style, as the reconstructed speech shares speaker identity with the source utterance of the global encoding. Additionally, we demonstrate a useful application of our pre-trained model: we can train a speaker recognition model on the global latent variables and achieve high accuracy by fine-tuning with as little data as one label per speaker.
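The factorization above can be sketched in a few lines of toy code. This is a hypothetical, non-neural illustration of the data flow only (the paper's actual encoders and decoder are learned networks): a local encoder maps each frame to a per-frame latent, a global encoder pools the whole utterance into one latent, and the decoder conditions every local latent on a broadcast copy of the global latent. Swapping in the global latent of a different utterance, as in the paper's style-transfer experiments, then amounts to passing a different second argument to the decoder.

```python
# Toy sketch of content/style factorization (hypothetical shapes and functions;
# the real model uses learned neural encoders and a neural decoder).

def local_encode(frames):
    """Per-frame 'content' latents: one latent vector per input frame."""
    return [[x * 0.5 for x in frame] for frame in frames]

def global_encode(frames):
    """Single per-utterance 'style' latent, here a simple mean over frames."""
    n, dim = len(frames), len(frames[0])
    return [sum(f[d] for f in frames) / n for d in range(dim)]

def decode(local_latents, global_latent):
    """Reconstruct frames by conditioning each per-frame local latent on the
    broadcast global latent (concatenation stands in for real conditioning)."""
    return [z + global_latent for z in local_latents]

utterance_a = [[1.0, 2.0], [3.0, 4.0]]                 # 2 frames, 2 dims
utterance_b = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]]     # a different speaker

# "Content of A, style of B": the frame count follows the local latents of A,
# while every frame carries the same global latent taken from B.
recon = decode(local_encode(utterance_a), global_encode(utterance_b))
```

Note that the number of output frames is fixed by the local (content) path, while the global (style) path contributes one shared vector, which is what makes the two factors separable.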



