Make-A-Voice: Unified Voice Synthesis With Discrete Representation

05/30/2023
by Rongjie Huang, et al.

Various applications of voice synthesis have been developed independently, despite the fact that they all generate "voice" as output. In addition, most current voice synthesis models rely on annotated audio data, yet scaling them to self-supervised datasets is crucial for capturing the wide range of acoustic variation in the human voice, including speaker identity, emotion, and prosody. In this work, we propose Make-A-Voice, a unified framework for synthesizing and manipulating voice signals from discrete representations. Make-A-Voice models the human voice with a "coarse-to-fine" approach comprising three stages: 1) semantic stage: model the high-level transformation between linguistic content and self-supervised semantic tokens; 2) acoustic stage: introduce varying control signals as acoustic conditions for semantic-to-acoustic modeling; and 3) generation stage: synthesize high-fidelity waveforms from acoustic tokens. Make-A-Voice offers notable benefits as a unified voice synthesis framework: 1) Data scalability: the major backbone (i.e., the acoustic and generation stages) does not require any annotations, so the training data can be scaled up. 2) Controllability and conditioning flexibility: we investigate different conditioning mechanisms and effectively handle three voice synthesis applications, namely text-to-speech (TTS), voice conversion (VC), and singing voice synthesis (SVS), by re-synthesizing the discrete voice representations with prompt guidance. Experimental results demonstrate that Make-A-Voice achieves superior audio quality and style similarity compared with competitive baseline models. Audio samples are available at https://Make-A-Voice.github.io
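The three-stage cascade described above reduces, at a high level, to a data flow of linguistic content → semantic tokens → acoustic tokens → waveform. The following is a minimal PyTorch-style sketch of that flow; the module names (SemanticLM, AcousticLM, UnitVocoder) and their interfaces are hypothetical placeholders for illustration, not the authors' released implementation.

```python
# Minimal sketch of the coarse-to-fine cascade in the abstract.
# The injected modules (semantic_lm, acoustic_lm, vocoder) are assumed
# placeholders, not the authors' actual code.
import torch
import torch.nn as nn


class MakeAVoicePipeline(nn.Module):
    """Phonemes -> semantic tokens -> acoustic tokens -> waveform."""

    def __init__(self, semantic_lm: nn.Module, acoustic_lm: nn.Module, vocoder: nn.Module):
        super().__init__()
        self.semantic_lm = semantic_lm  # stage 1: linguistic content -> semantic tokens
        self.acoustic_lm = acoustic_lm  # stage 2: semantic tokens (+ prompt) -> acoustic tokens
        self.vocoder = vocoder          # stage 3: acoustic tokens -> waveform

    @torch.no_grad()
    def synthesize(self, phoneme_ids: torch.Tensor, acoustic_prompt: torch.Tensor) -> torch.Tensor:
        # Semantic stage: map linguistic content to self-supervised semantic tokens.
        semantic_tokens = self.semantic_lm(phoneme_ids)
        # Acoustic stage: the acoustic prompt (e.g., a short reference utterance)
        # supplies speaker identity / style as a control signal.
        acoustic_tokens = self.acoustic_lm(semantic_tokens, acoustic_prompt)
        # Generation stage: decode acoustic tokens into a high-fidelity waveform.
        return self.vocoder(acoustic_tokens)
```

Read this way, only the semantic stage touches text, so the acoustic and generation stages could in principle be trained on unannotated audio alone, which is consistent with the data-scalability claim in the abstract.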
