
VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion

by Disong Wang et al.
HUAWEI Technologies Co., Ltd.
The Chinese University of Hong Kong

One-shot voice conversion (VC), which performs conversion across arbitrary speakers with only a single target-speaker utterance for reference, can be effectively achieved by speech representation disentanglement. Existing work generally ignores the correlation between different speech representations during training, which causes content information to leak into the speaker representation and thus degrades VC performance. To alleviate this issue, we employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training, achieving proper disentanglement of content, speaker and pitch representations by reducing their inter-dependencies in an unsupervised manner. Experimental results demonstrate that the proposed method learns effective disentangled speech representations that retain source linguistic content and intonation variations while capturing target speaker characteristics. As a result, the proposed approach achieves higher speech naturalness and speaker similarity than current state-of-the-art one-shot VC systems. Our code, pre-trained models and demo are available at
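The VQ content-encoding step described above can be sketched as a nearest-neighbour codebook lookup: each frame-level content embedding is replaced by the closest codebook entry, which discards fine-grained (e.g. speaker-specific) detail. This is a minimal illustration with a hypothetical toy codebook, not the authors' implementation (which uses learned, high-dimensional codebooks trained jointly with the encoder):

```python
# Minimal sketch of vector-quantized content encoding.
# `codebook` and the 2-D frame vectors below are toy assumptions for
# illustration; real systems learn hundreds of higher-dimensional entries.

def quantize(frame, codebook):
    """Return (index, vector) of the codebook entry nearest to `frame`
    in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: dist2(frame, codebook[i]))
    return idx, codebook[idx]

# Toy codebook with three 2-D entries.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

# A continuous frame embedding snaps to its nearest discrete code.
idx, vec = quantize((0.9, 0.1), codebook)  # -> index 1, vector (1.0, 0.0)
```

The discrete bottleneck is what encourages the content codes to drop speaker identity; the MI term in the paper additionally penalizes residual statistical dependence between the content, speaker and pitch representations.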


Speech Representation Disentanglement with Adversarial Mutual Information Learning for One-shot Voice Conversion

One-shot voice conversion (VC) with only a single target speaker's speec...

AVQVC: One-shot Voice Conversion by Vector Quantization with applying contrastive learning

Voice Conversion (VC) refers to changing the timbre of a speech while ret...

Learning Speaker Representations with Mutual Information

Learning good representations is of crucial importance in deep learning....

Computing with Hypervectors for Efficient Speaker Identification

We introduce a method to identify speakers by computing with high-dimens...

Preliminary study on using vector quantization latent spaces for TTS/VC systems with consistent performance

Generally speaking, the main objective when training a neural speech syn...

TriAAN-VC: Triple Adaptive Attention Normalization for Any-to-Any Voice Conversion

Voice Conversion (VC) must be achieved while maintaining the content of ...

Exploring Disentanglement with Multilingual and Monolingual VQ-VAE

This work examines the content and usefulness of disentangled phone and ...

Code Repositories


Official implementation of VQMIVC: One-shot (any-to-any) Voice Conversion @ Interspeech 2021
