EdiTTS: Score-based Editing for Controllable Text-to-Speech

10/06/2021
by Jaesung Tae et al.

We present EdiTTS, an off-the-shelf speech editing methodology based on score-based generative modeling for text-to-speech synthesis. EdiTTS allows for targeted, granular editing of audio, both in terms of content and pitch, without the need for any additional training, task-specific optimization, or architectural modifications to the score-based model backbone. Specifically, we apply coarse yet deliberate perturbations in the Gaussian prior space to induce desired behavior from the diffusion model, while applying masks and softening kernels to ensure that iterative edits are applied only to the target region. Listening tests demonstrate that EdiTTS is capable of reliably generating natural-sounding audio that satisfies user-imposed requirements.
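The paper itself specifies the full editing procedure; as a rough illustration of the masking-and-softening idea described in the abstract, the sketch below shows how a softened binary mask can confine iterative denoising updates to a target region while frames outside the mask are repeatedly pinned back to the original sample. This is a hypothetical NumPy toy, not the authors' implementation: `score_model`, `x_edit_prior`, the moving-average kernel, and the simplified Euler-Maruyama update are all placeholder assumptions.

```python
import numpy as np

def soften(mask, kernel_width=5):
    """Smooth a hard binary edit mask with a moving-average kernel so that
    edits blend into the surrounding frames instead of cutting abruptly."""
    kernel = np.ones(kernel_width) / kernel_width
    return np.clip(np.convolve(mask, kernel, mode="same"), 0.0, 1.0)

def edit_with_mask(score_model, x_orig, x_edit_prior, mask, n_steps=100):
    """Toy reverse-diffusion loop: denoising updates are applied only inside
    the softened target region; the rest of the trajectory is continually
    reset to the original sample (all shapes assumed 1-D, length T)."""
    soft_mask = soften(mask)
    # start from a perturbed prior inside the edit region only
    x = soft_mask * x_edit_prior + (1.0 - soft_mask) * x_orig
    step = 1.0 / n_steps
    for t in np.linspace(1.0, 1e-3, n_steps):
        score = score_model(x, t)                          # assumed estimate of grad log p_t(x)
        noise = np.random.randn(*x.shape)
        x_next = x + step * score + np.sqrt(step) * noise  # simplified Euler-Maruyama step
        # keep the untouched region pinned to the original sample
        x = soft_mask * x_next + (1.0 - soft_mask) * x_orig
    return x
```

The key design point this sketch tries to convey is that no retraining is involved: the pretrained score network is only queried, and the mask blending after each step is what restricts the edit to the target region.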


Related research

10/05/2021
Neural Pitch-Shifting and Time-Stretching with Controllable LPCNet
Modifying the pitch and timing of an audio signal are fundamental audio ...

06/19/2023
Instruct-NeuralTalker: Editing Audio-Driven Talking Radiance Fields with Instructions
Recent neural talking radiance field methods have shown great success in...

09/21/2023
FluentEditor: Text-based Speech Editing by Considering Acoustic and Prosody Consistency
Text-based speech editing (TSE) techniques are designed to enable users ...

05/23/2023
FluentSpeech: Stutter-Oriented Automatic Speech Editing with Context-Aware Diffusion Models
Stutter removal is an essential scenario in the field of speech editing....

01/10/2023
Speech Driven Video Editing via an Audio-Conditioned Diffusion Model
In this paper we propose a method for end-to-end speech driven video edi...

05/06/2023
AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion
Recent advances in diffusion models have showcased promising results in ...
