
High quality, lightweight and adaptable TTS using LPCNet

by Zvi Kons, et al.

We present a lightweight, adaptable neural TTS system with high-quality output. The system is composed of three separate neural network blocks: prosody prediction, acoustic feature prediction, and the LPCNet neural vocoder. This system can synthesize speech with close to natural quality while running 3 times faster than real-time on a standard CPU. The modular setup of the system allows for simple adaptation to new voices with a small amount of data. We first demonstrate the ability of the system to produce high-quality speech when trained on large, high-quality datasets. Following that, we demonstrate its adaptability by mimicking unseen voices using datasets 5 to 20 minutes long with lower recording quality. Large-scale Mean Opinion Score quality and similarity tests are presented, showing that the system can adapt to unseen voices with a quality gap of 0.12 and a similarity gap of 3% relative to natural speech for male voices, and a quality gap of 0.35 and a similarity gap of 9% for female voices.
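The three-block pipeline the abstract describes can be sketched as follows. This is a minimal illustrative skeleton, not the authors' actual code: all class and function names, feature shapes, and numeric values here are assumptions, and each block is a stub standing in for a trained neural network.

```python
# Hypothetical sketch of the modular three-block TTS pipeline:
# prosody prediction -> acoustic feature prediction -> neural vocoder (LPCNet).
# Every name and value below is illustrative, not the paper's API.

from dataclasses import dataclass
from typing import List


@dataclass
class Prosody:
    """Per-phoneme prosodic targets: duration in frames and a log-f0 value."""
    durations: List[int]
    log_f0: List[float]


def predict_prosody(phonemes: List[str]) -> Prosody:
    # Stub for the prosody-prediction network: here, a fixed 5-frame
    # duration and a flat pitch contour per phoneme.
    return Prosody(durations=[5] * len(phonemes),
                   log_f0=[4.7] * len(phonemes))


def predict_acoustic_features(phonemes: List[str],
                              prosody: Prosody) -> List[List[float]]:
    # Stub for the acoustic-feature network: expand each phoneme into
    # `duration` frames of (dummy) vocoder features.
    frames: List[List[float]] = []
    for duration, f0 in zip(prosody.durations, prosody.log_f0):
        frames.extend([[f0, 0.0, 0.0]] * duration)
    return frames


def vocode(frames: List[List[float]], samples_per_frame: int = 160) -> List[float]:
    # Stub for the LPCNet vocoder: map each feature frame to a block of
    # waveform samples (silence here).
    return [0.0] * (len(frames) * samples_per_frame)


def synthesize(phonemes: List[str]) -> List[float]:
    # Because the blocks are separate, each can be retrained or adapted
    # independently, which is what makes adaptation from a few minutes
    # of target-voice data practical.
    prosody = predict_prosody(phonemes)
    frames = predict_acoustic_features(phonemes, prosody)
    return vocode(frames)
```

The design point is the modularity itself: swapping or fine-tuning one block (e.g. only the prosody model) on a small target-voice dataset leaves the others untouched.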

