Transformer VQ-VAE for Unsupervised Unit Discovery and Speech Synthesis: ZeroSpeech 2020 Challenge

05/24/2020
by Andros Tjandra, et al.

In this paper, we describe our system submitted to the ZeroSpeech 2020 challenge, Track 2019. The goal of this track is to build a speech synthesizer without any textual information or phonetic labels. To tackle this challenge, we build a system that addresses two major components: 1) given speech audio, extract subword units in an unsupervised way, and 2) re-synthesize the audio in the voices of novel speakers. The system also needs to balance codebook performance between the ABX error rate and the bitrate of the compressed representation. Our main contributions are a Transformer-based VQ-VAE for unsupervised unit discovery and a Transformer-based inverter that synthesizes speech from the extracted codebook. Additionally, we explore several regularization methods to improve performance further.
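At the core of the unit-discovery component described above is the VQ-VAE quantization step: each encoder output frame is snapped to its nearest codebook vector, and the resulting index sequence is the discrete "subword unit" stream whose bitrate is traded off against ABX error. The following is a minimal NumPy sketch of that lookup only (illustrative; the `quantize` function, shapes, and toy data are assumptions, not the authors' implementation):

```python
import numpy as np

def quantize(z_e, codebook):
    """Nearest-neighbour lookup at the heart of a VQ-VAE:
    each encoder frame z_e[t] (shape (T, D)) is replaced by its
    closest entry of codebook (shape (K, D)), yielding one
    discrete unit index per frame."""
    # (T, K) matrix of squared L2 distances via broadcasting
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)   # discrete unit index per frame
    z_q = codebook[idx]      # quantized latents fed to the decoder/inverter
    return idx, z_q

# toy example: 4 frames, a 3-entry codebook, 2-dim latents
rng = np.random.default_rng(0)
codebook = rng.normal(size=(3, 2))
z_e = codebook[[0, 2, 2, 1]] + 0.01 * rng.normal(size=(4, 2))
idx, z_q = quantize(z_e, codebook)
print(idx.tolist())
```

In a trained model the gradient is passed through this non-differentiable lookup with a straight-through estimator, and the index sequence `idx` is what the Transformer-based inverter consumes to re-synthesize audio.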

