Attention and Encoder-Decoder based models for transforming articulatory movements at different speaking rates

06/04/2020
by Abhayjeet Singh, et al.

While speaking at different rates, articulators (such as the tongue and lips) tend to move differently, and the enunciations also have different durations. In the past, affine transformations and DNNs have been used to transform articulatory movements from neutral to fast (N2F) and neutral to slow (N2S) speaking rates [1]. In this work, we improve over the existing transformation techniques by modeling rate-specific durations and their transformation using AstNet, an encoder-decoder framework with attention. Specifically, we propose an encoder-decoder architecture based on LSTMs, which generates smoother predicted articulatory trajectories. To model duration variations across speaking rates, we deploy an attention network, which eliminates the need to align trajectories at different rates using DTW. We perform a phoneme-specific duration analysis to examine how well duration is transformed by the proposed AstNet. Since the range of articulatory motion is correlated with speaking rate, we also analyze the amplitude of the transformed articulatory movements at different rates compared to their original counterparts, to examine how well the proposed AstNet predicts the extent of articulatory movements in N2F and N2S. We observe that AstNet models both the duration and the extent of articulatory movements better than the existing transformation techniques, resulting in more accurate transformed articulatory trajectories.
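
The abstract describes AstNet only at a high level: an LSTM encoder-decoder with attention that maps a neutral-rate articulatory trajectory to a fast- or slow-rate one without DTW alignment. As a rough illustration of that kind of architecture, the following is a minimal PyTorch sketch; the class name AstNetSketch, the additive-attention formulation, the layer sizes, and the articulatory feature dimensionality are all assumptions for illustration, not details taken from the paper.

# Minimal sketch (assumption): an LSTM encoder-decoder with additive attention
# that maps a neutral-rate articulatory trajectory to a trajectory at a new rate.
# Names and hyperparameters are illustrative, not the authors' implementation.
import torch
import torch.nn as nn


class AstNetSketch(nn.Module):
    def __init__(self, n_articulatory_dims=12, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(n_articulatory_dims, hidden, batch_first=True,
                               bidirectional=True)
        self.decoder_cell = nn.LSTMCell(n_articulatory_dims + 2 * hidden, hidden)
        # Additive (Bahdanau-style) attention over the encoder states
        self.attn_enc = nn.Linear(2 * hidden, hidden)
        self.attn_dec = nn.Linear(hidden, hidden)
        self.attn_v = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, n_articulatory_dims)

    def forward(self, neutral_traj, target_len):
        # neutral_traj: (batch, T_in, n_dims) neutral-rate articulatory trajectory
        enc_out, _ = self.encoder(neutral_traj)          # (batch, T_in, 2*hidden)
        batch = neutral_traj.size(0)
        h = neutral_traj.new_zeros(batch, self.decoder_cell.hidden_size)
        c = torch.zeros_like(h)
        prev = neutral_traj.new_zeros(batch, self.out.out_features)
        keys = self.attn_enc(enc_out)                    # precomputed attention keys
        outputs = []
        for _ in range(target_len):                      # target_len sets the new duration
            # Soft alignment over input frames replaces explicit DTW alignment
            scores = self.attn_v(torch.tanh(keys + self.attn_dec(h).unsqueeze(1)))
            weights = torch.softmax(scores, dim=1)       # (batch, T_in, 1)
            context = (weights * enc_out).sum(dim=1)     # (batch, 2*hidden)
            h, c = self.decoder_cell(torch.cat([prev, context], dim=-1), (h, c))
            prev = self.out(h)                           # predicted frame at the new rate
            outputs.append(prev)
        return torch.stack(outputs, dim=1)               # (batch, target_len, n_dims)

In this sketch, target_len fixes the duration of the generated trajectory, so an N2F transformation would use a shorter target_len than the input and N2S a longer one; the soft attention weights provide the frame-level alignment that DTW would otherwise have to supply.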
