TalkNet 2: Non-Autoregressive Depth-Wise Separable Convolutional Model for Speech Synthesis with Explicit Pitch and Duration Prediction

04/16/2021
by Stanislav Beliaev, et al.

We propose TalkNet, a non-autoregressive convolutional neural model for speech synthesis with explicit pitch and duration prediction. The model consists of three feed-forward convolutional networks. The first network predicts grapheme durations; the input text is then expanded by repeating each symbol according to its predicted duration. The second network predicts a pitch value for every mel frame. The third network generates a mel-spectrogram from the expanded text, conditioned on the predicted pitch. All three networks are based on a 1D depth-wise separable convolutional architecture. Explicit duration prediction eliminates word skipping and repetition. The quality of the generated speech nearly matches that of the best autoregressive models: TalkNet trained on the LJSpeech dataset achieved a MOS of 4.08. The model has only 13.2M parameters, almost 2x fewer than current state-of-the-art text-to-speech models. The non-autoregressive architecture allows fast training and inference, 422x faster than real time. The small model size and fast inference make TalkNet an attractive candidate for embedded speech synthesis.
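The duration-based expansion step described above (often called a length regulator in non-autoregressive TTS) can be sketched as follows. This is a minimal illustration, not TalkNet's actual code; the function name and the example durations are assumptions made for the example.

```python
# Hedged sketch of duration-based expansion: each input symbol is repeated
# according to its predicted duration (in mel frames), so the expanded
# sequence aligns with the mel-spectrogram timeline. Names are illustrative.

def expand_by_duration(symbols, durations):
    """Repeat each symbol durations[i] times to match the mel-frame timeline."""
    if len(symbols) != len(durations):
        raise ValueError("symbols and durations must align one-to-one")
    expanded = []
    for sym, dur in zip(symbols, durations):
        expanded.extend([sym] * dur)  # dur == 0 drops the symbol entirely
    return expanded

# Example: graphemes "cat" with hypothetical per-symbol durations 3, 1, 2
print(expand_by_duration(list("cat"), [3, 1, 2]))
# → ['c', 'c', 'c', 'a', 't', 't']
```

The pitch-conditioned mel-spectrogram generator then operates on this expanded sequence, so every decoder step maps to exactly one mel frame and no symbols can be skipped or repeated at inference time.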


Related research

- JDI-T: Jointly trained Duration Informed Transformer for Text-To-Speech without Explicit Alignment (05/15/2020). "We propose Jointly trained Duration Informed Transformer (JDI-T), a feed..."
- FastPitch: Parallel Text-to-speech with Pitch Prediction (06/11/2020). "We present FastPitch, a fully-parallel text-to-speech model based on Fas..."
- Median-Based Generation of Synthetic Speech Durations using a Non-Parametric Approach (08/22/2016). "This paper proposes a new approach to duration modelling for statistical..."
- Expressive, Variable, and Controllable Duration Modelling in TTS (06/28/2022). "Duration modelling has become an important research problem once more wi..."
- Hierarchical prosody modeling and control in non-autoregressive parallel neural TTS (10/06/2021). "Neural text-to-speech (TTS) synthesis can generate speech that is indist..."
- Hierarchical Prosody Modeling for Non-Autoregressive Speech Synthesis (11/12/2020). "Prosody modeling is an essential component in modern text-to-speech (TTS..."
- A Neural Parametric Singing Synthesizer (04/12/2017). "We present a new model for singing synthesis based on a modified version..."
