
Speaker Generation

11/07/2021
by Daisy Stanton, et al.

This work explores the task of synthesizing speech in nonexistent human-sounding voices. We call this task "speaker generation", and present TacoSpawn, a system that performs competitively at this task. TacoSpawn is a recurrent attention-based text-to-speech model that learns a distribution over a speaker embedding space, which enables sampling of novel and diverse speakers. Our method is easy to implement, and does not require transfer learning from speaker ID systems. We present objective and subjective metrics for evaluating performance on this task, and demonstrate that our proposed objective metrics correlate with human perception of speaker similarity. Audio samples are available on our demo page.
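The core idea in the abstract — learning a distribution over a speaker embedding space so that novel voices can be sampled — can be illustrated with a minimal sketch. This is not TacoSpawn's actual implementation (the paper's model and parameterization are more involved); here we simply assume a hypothetical table of trained speaker embeddings, fit a diagonal Gaussian over it, and draw a new embedding that corresponds to no training speaker. Conditioning a multi-speaker TTS decoder on such a sample is what yields a nonexistent voice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained speaker-embedding table
# (n_speakers x embedding_dim), as learned by a multi-speaker TTS model.
n_speakers, dim = 100, 16
speaker_embeddings = rng.normal(size=(n_speakers, dim))

# Fit a simple diagonal Gaussian over the embedding space.
# (A learned mixture or richer density model would be closer to the
# paper's setup; a single Gaussian keeps the illustration minimal.)
mu = speaker_embeddings.mean(axis=0)
sigma = speaker_embeddings.std(axis=0)

def sample_novel_speaker():
    """Draw a new embedding from the fitted distribution.

    Feeding this vector to the TTS model's speaker-conditioning input
    would produce speech in a voice matching no training speaker.
    """
    return mu + sigma * rng.normal(size=dim)

new_speaker = sample_novel_speaker()
print(new_speaker.shape)
```

Because the distribution is fitted rather than copied from any one speaker, repeated draws give diverse embeddings, which is the "novel and diverse speakers" property the abstract describes.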


Related research

03/24/2018 · Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron
We present an extension to the Tacotron speech synthesis architecture th...

06/21/2022 · Human-in-the-loop Speaker Adaptation for DNN-based Multi-speaker TTS
This paper proposes a human-in-the-loop speaker-adaptation method for mu...

02/20/2018 · Fitting New Speakers Based on a Short Untranscribed Sample
Learning-based Text To Speech systems have the potential to generalize f...

06/25/2022 · Synthesizing Personalized Non-speech Vocalization from Discrete Speech Representations
We formulated non-speech vocalization (NSV) modeling as a text-to-speech...

03/05/2022 · Language vs Speaker Change: A Comparative Study
Spoken language change detection (LCD) refers to detecting language swit...

09/18/2019 · RTTD-ID: Tracked Captions with Multiple Speakers for Deaf Students
Students who are deaf and hard of hearing cannot hear in class and do no...