Tagged-MRI Sequence to Audio Synthesis via Self Residual Attention Guided Heterogeneous Translator

06/05/2022
by Xiaofeng Liu, et al.

Understanding the underlying relationship between tongue and oropharyngeal muscle deformation seen in tagged-MRI and intelligible speech plays an important role in advancing speech motor control theories and the treatment of speech-related disorders. Because of their heterogeneous representations, however, direct mapping between the two modalities – i.e., a two-dimensional (mid-sagittal slice) plus time tagged-MRI sequence and its corresponding one-dimensional waveform – is not straightforward. Instead, we resort to two-dimensional spectrograms as an intermediate representation, which captures both pitch and resonance, and develop an end-to-end deep learning framework to translate a sequence of tagged-MRI into its corresponding audio waveform despite limited dataset size. Our framework is based on a novel fully convolutional asymmetry translator guided by a self residual attention strategy that specifically exploits the moving muscular structures during speech. In addition, we leverage pairwise correlations between samples with the same utterances via a latent space representation disentanglement strategy. Furthermore, we incorporate adversarial training with generative adversarial networks to improve the realism of the generated spectrograms. Our experimental results, carried out on a total of 63 tagged-MRI sequences alongside speech acoustics, showed that our framework generated clear audio waveforms from tagged-MRI sequences, surpassing competing methods. Thus, our framework shows great potential for better understanding the relationship between the two modalities.
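To make the self residual attention idea concrete, below is a minimal NumPy sketch of residual self-attention over a spatial feature map: attention weights between spatial positions re-weight the features, and an identity (residual) path is added back. The function name, tensor shapes, and scaling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def residual_self_attention(x):
    """Illustrative residual self-attention over a (C, H, W) feature map.

    Computes pairwise attention between the H*W spatial positions,
    aggregates features accordingly, and adds the input back as a
    residual path. This is a sketch, not the paper's architecture.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                      # (C, N) with N = H*W

    # Pairwise similarity between spatial positions, scaled by sqrt(C)
    scores = (flat.T @ flat) / np.sqrt(C)           # (N, N)

    # Row-wise softmax (numerically stabilized)
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # rows sum to 1

    # Attended features plus the residual (identity) path
    out = flat @ attn.T + flat                      # (C, N)
    return out.reshape(C, H, W), attn
```

In a full translator, such an attention map would let the network emphasize the moving muscular structures while the residual connection preserves the original encoder features.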

