Conditional Deep Hierarchical Variational Autoencoder for Voice Conversion

12/06/2021, by Kei Akuzawa, et al.

Variational autoencoder-based voice conversion (VAE-VC) has the advantage of requiring only pairs of speech and speaker labels for training. Unlike the majority of research on VAE-VC, which focuses on utilizing auxiliary losses or discretizing latent variables, this paper investigates how increasing model expressiveness benefits and impacts VAE-VC. Specifically, we first analyze VAE-VC from a rate-distortion perspective and point out that model expressiveness is significant for VAE-VC because rate and distortion reflect the similarity and naturalness of converted speech. Based on this analysis, we propose a novel VC method using a deep hierarchical VAE, which has high model expressiveness as well as fast conversion speed thanks to its non-autoregressive decoder. Our analysis also reveals another problem: similarity can degrade when the latent variable of the VAE carries redundant information. We address this problem by controlling the information contained in the latent variable using the β-VAE objective. In experiments on the VCTK corpus, the proposed method achieved mean opinion scores above 3.5 for both naturalness and similarity in inter-gender settings, exceeding the scores of existing autoencoder-based VC methods.
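The rate-distortion view and the β-VAE objective mentioned in the abstract can be illustrated with a minimal sketch. This is a generic illustration, not the paper's implementation: it assumes a diagonal-Gaussian posterior with a standard-normal prior, and uses mean squared error as a stand-in for the decoder's negative log-likelihood.

```python
import numpy as np

def beta_vae_objective(x, x_recon, mu, log_var, beta=1.0):
    """Loss = distortion + beta * rate for one utterance.

    distortion: reconstruction error (MSE here, standing in for the
    decoder's negative log-likelihood).
    rate: KL divergence between the diagonal-Gaussian posterior
    N(mu, diag(exp(log_var))) and the standard-normal prior,
    i.e. 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2).
    """
    distortion = np.mean((x - x_recon) ** 2)
    rate = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return distortion + beta * rate, distortion, rate

# When the posterior equals the prior, the rate term vanishes:
loss, d, r = beta_vae_objective(
    x=np.ones(4), x_recon=np.ones(4),
    mu=np.zeros(2), log_var=np.zeros(2), beta=0.5)
# Shrinking beta lowers the penalty on rate, letting the latent
# variable carry more information about the input.
```

Tuning β trades rate against distortion: a larger β pushes the posterior toward the prior (lower rate, i.e. less information in the latent variable), which is the knob the abstract describes for removing redundant information from the latent code.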

Related research

- 10/15/2020: The NeteaseGames System for Voice Conversion Challenge 2020 with Vector-quantization Variational Autoencoder and WaveNet
- 12/16/2022: Text-to-speech synthesis based on latent variable conversion using diffusion probabilistic model and variational autoencoder
- 07/30/2020: Quantitative Understanding of VAE by Interpreting ELBO as Rate Distortion Cost of Transform Coding
- 09/19/2019: Learning to Conceal: A Deep Learning Based Method for Preserving Privacy and Avoiding Prejudice
- 09/14/2018: Unsupervised Abstractive Sentence Summarization using Length Controlled Variational Autoencoder
- 10/08/2021: KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE using Mel-spectrograms
- 12/07/2022: Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve
