Silent Speech and Emotion Recognition from Vocal Tract Shape Dynamics in Real-Time MRI

06/16/2021
by Laxmi Pandey et al.

The speech sounds of spoken language are produced by varying the configuration of the articulators surrounding the vocal tract. They contain abundant information that can be used to better understand the underlying mechanisms of human speech production. We propose a novel deep neural network-based learning framework that understands acoustic information in variable-length sequences of vocal tract shaping during speech production, captured by real-time magnetic resonance imaging (rtMRI), and translates it into text. The proposed framework comprises spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification (CTC) loss, trained entirely end-to-end. On the USC-TIMIT corpus, the model achieved a 40.6% phoneme error rate at the sentence level, an improvement over existing models. To the best of our knowledge, this is the first study to demonstrate the recognition of entire spoken sentences from an individual's articulatory motions captured by rtMRI video. We also analyzed variations in the geometry of articulation in each sub-region of the vocal tract (i.e., the pharyngeal, velar and dorsal, hard palate, and labial constriction regions) with respect to different emotions and genders. Results suggest that the distortion of each sub-region is affected by both emotion and gender.


Related research

- Towards Automatic Speech Identification from Vocal Tract Shape Dynamics in Real-time MRI (07/29/2018): Vocal tract configurations play a vital role in generating distinguishab...
- Attention-based Region of Interest (ROI) Detection for Speech Emotion Recognition (03/03/2022): Automatic emotion recognition for real-life applications is a challengi...
- End-to-end Triplet Loss based Emotion Embedding System for Speech Emotion Recognition (10/13/2020): In this paper, an end-to-end neural embedding system based on triplet lo...
- Deep Multimodal Learning for Emotion Recognition in Spoken Language (02/22/2018): In this paper, we present a novel deep multimodal framework to predict h...
- Embedded Emotions – A Data Driven Approach to Learn Transferable Feature Representations from Raw Speech Input for Emotion Recognition (09/30/2020): Traditional approaches to automatic emotion recognition are relying on t...
- An error correction scheme for improved air-tissue boundary in real-time MRI video for speech production (03/09/2022): The best performance in Air-tissue boundary (ATB) segmentation of real-t...
- LipNet: End-to-End Sentence-level Lipreading (11/05/2016): Lipreading is the task of decoding text from the movement of a speaker's...
