Neural Speech Synthesis on a Shoestring: Improving the Efficiency of LPCNet

02/22/2022
by Jean-Marc Valin, et al.

Neural speech synthesis models can synthesize high-quality speech, but typically at a high computational cost. In previous work, we introduced LPCNet, which uses linear prediction to significantly reduce the complexity of neural synthesis. In this work, we further improve the efficiency of LPCNet, targeting both algorithmic and computational improvements, to make it usable on a wide variety of devices. We demonstrate improved synthesis quality while operating 2.5x faster. The resulting open-source LPCNet algorithm can perform real-time neural synthesis on most existing phones and is even usable on some embedded devices.
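For readers who want to see the core idea concretely, the snippet below is a minimal NumPy sketch of linear prediction as used in LPCNet: each sample is first predicted as a short weighted sum of past samples using conventional DSP, so the neural network only needs to model the much smaller residual (excitation) signal. This is an illustrative sketch, not the actual LPCNet implementation (which is written in C); the helper names, the LPC order of 16, and the toy signal are assumptions made for the example.

```python
import numpy as np

def lpc_from_frame(frame, order=16):
    """Estimate LPC coefficients a[0..order] (a[0] = 1) via Levinson-Durbin."""
    # Autocorrelation r[0..order] of the analysis frame.
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-9  # small bias avoids division by zero on silent frames
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        new_a = a.copy()
        new_a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return a

def lpc_split(signal, a):
    """Split a signal into its linear prediction and the residual (excitation)."""
    order = len(a) - 1
    pred = np.zeros_like(signal)
    for n in range(order, len(signal)):
        # Prediction from the previous `order` samples: cheap, purely linear DSP.
        pred[n] = -np.dot(a[1:], signal[n - order:n][::-1])
    return pred, signal - pred

# Toy usage with a voiced-like signal: the residual carries far less energy
# than the raw waveform, which is what lets a small network do the heavy lifting.
t = np.arange(1600) / 16000.0
x = np.sin(2 * np.pi * 140 * t) + 0.3 * np.sin(2 * np.pi * 280 * t)
a = lpc_from_frame(x)
pred, residual = lpc_split(x, a)
print("signal power:", np.mean(x ** 2), "residual power:", np.mean(residual ** 2))
```

In LPCNet, prediction and filtering of this kind are handled by inexpensive fixed operations, and the network is reserved for the part that linear prediction cannot capture, which is the main source of the complexity reduction described in the abstract.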


Related research

LPCNet: Improving Neural Speech Synthesis Through Linear Prediction (10/28/2018)
Neural speech synthesis models have recently demonstrated the ability to...

On-device neural speech synthesis (09/17/2021)
Recent advances in text-to-speech (TTS) synthesis, such as Tacotron and ...

End-to-end LPCNet: A Neural Vocoder With Fully-Differentiable LPC Estimation (02/23/2022)
Neural vocoders have recently demonstrated high quality speech synthesis...

A Real-Time Wideband Neural Vocoder at 1.6 kb/s Using LPCNet (03/28/2019)
Neural speech synthesis algorithms are a promising new approach for codi...

Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis (09/22/2012)
The importance of modeling speech articulation for high-quality audiovis...

High quality, lightweight and adaptable TTS using LPCNet (05/02/2019)
We present a lightweight adaptable neural TTS system with high quality o...

Universal Neural Vocoding with Parallel WaveNet (02/01/2021)
We present a universal neural vocoder based on Parallel WaveNet, with an...
