EMA2S: An End-to-End Multimodal Articulatory-to-Speech System

02/07/2021
by Yu-Wen Chen, et al.

Speech synthesized from articulatory movements can have real-world use for patients with vocal cord disorders, in situations requiring silent speech, or in high-noise environments. In this work, we present EMA2S, an end-to-end multimodal articulatory-to-speech system that directly converts articulatory movements to speech signals. We use a neural-network-based vocoder combined with multimodal joint training, incorporating spectrogram, mel-spectrogram, and deep features. The experimental results confirm that the multimodal approach of EMA2S outperforms the baseline system in terms of both objective and subjective evaluation metrics. Moreover, the results demonstrate that joint mel-spectrogram and deep-feature loss training can effectively improve system performance.
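To make the joint-training objective concrete, the following is a minimal sketch of how spectrogram, mel-spectrogram, and deep-feature losses might be combined into a single training loss. It assumes PyTorch; the function name, the equal default weights, and the choice of L1 distance are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def joint_loss(pred_spec, true_spec,
               pred_mel, true_mel,
               pred_deep, true_deep,
               w_spec=1.0, w_mel=1.0, w_deep=1.0):
    # Weighted sum of per-target reconstruction losses.
    # Weights and the L1 distance are assumptions for illustration.
    loss_spec = F.l1_loss(pred_spec, true_spec)  # linear-spectrogram loss
    loss_mel = F.l1_loss(pred_mel, true_mel)     # mel-spectrogram loss
    loss_deep = F.l1_loss(pred_deep, true_deep)  # deep-feature loss
    return w_spec * loss_spec + w_mel * loss_mel + w_deep * loss_deep

Under this formulation, the mel-spectrogram and deep-feature terms act as auxiliary objectives alongside the linear-spectrogram reconstruction, which is consistent with the abstract's finding that joint mel-spectrogram and deep-feature loss training improves performance over the single-target baseline.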
