Two-Stream Joint-Training for Speaker Independent Acoustic-to-Articulatory Inversion

02/26/2023
by   Jianrong Wang, et al.

Acoustic-to-articulatory inversion (AAI) aims to estimate the parameters of articulators from speech audio. Two challenges are common in AAI: limited data and unsatisfactory performance in the speaker-independent scenario. Most current works extract features directly from speech and ignore phoneme information, which may limit AAI performance. To this end, we propose a novel network, SPN, that uses two different streams to carry out the AAI task. First, to improve speaker-independent performance, we propose a new phoneme stream network that estimates the articulatory parameters from phoneme features. To the best of our knowledge, this is the first work to extract speaker-independent features from phonemes to improve the performance of AAI. Second, to better represent the speech information, we train a speech stream network that combines local and global features. Compared with the state of the art (SOTA), the proposed method reduces RMSE by 0.18 mm and improves the result of the speaker-independent experiment by 6.0. The code has been released at https://github.com/liujinyu123/AAINetwork-SPN.
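The two-stream design described above can be sketched minimally: one stream derives speaker-independent features from phonemes, the other combines per-frame (local) and utterance-level (global) speech features, and the fused representation is regressed to articulatory trajectories. The sketch below uses NumPy with random weights purely to illustrate the data flow; all layer choices, dimensions, and names (`speech_stream`, `phoneme_stream`, `fuse`) are assumptions for illustration, not the authors' actual SPN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

T, F = 100, 39   # frames, acoustic feature dims (illustrative, e.g. MFCC-like)
P = 40           # phoneme-embedding size (assumed)
A = 12           # articulatory channels, e.g. 6 EMA sensors x 2 axes (assumed)

def speech_stream(x):
    # Local features: per-frame linear map (stand-in for a convolutional block).
    W_local = rng.standard_normal((F, 64)) * 0.1
    local = np.tanh(x @ W_local)                          # (T, 64)
    # Global features: utterance-level context broadcast to every frame
    # (stand-in for self-attention over the whole utterance).
    global_ctx = local.mean(axis=0, keepdims=True)        # (1, 64)
    global_rep = np.repeat(global_ctx, x.shape[0], axis=0)
    return np.concatenate([local, global_rep], axis=1)    # (T, 128)

def phoneme_stream(ph):
    # Speaker-independent phoneme features (stand-in for the phoneme network).
    W_ph = rng.standard_normal((P, 32)) * 0.1
    return np.tanh(ph @ W_ph)                             # (T, 32)

def fuse(speech_feat, ph_feat):
    # Concatenate both streams and regress articulator positions per frame.
    W_out = rng.standard_normal((speech_feat.shape[1] + ph_feat.shape[1], A)) * 0.1
    return np.concatenate([speech_feat, ph_feat], axis=1) @ W_out  # (T, A)

audio = rng.standard_normal((T, F))    # placeholder acoustic features
phones = rng.standard_normal((T, P))   # placeholder phoneme embeddings
pred = fuse(speech_stream(audio), phoneme_stream(phones))
print(pred.shape)  # (100, 12): one articulatory vector per frame
```

In a real system each stand-in would be a trained module, and the per-frame predictions would be scored against electromagnetic articulography (EMA) traces with RMSE, the metric the abstract reports.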


Related research

- 04/02/2022, Acoustic-to-articulatory Inversion based on Speech Decomposition and Auxiliary Feature: Acoustic-to-articulatory inversion (AAI) is to obtain the movement of ar...
- 02/14/2023, Speaker-Independent Acoustic-to-Articulatory Speech Inversion: To build speech processing methods that can handle speech as naturally a...
- 03/14/2019, Audiovisual Speaker Tracking using Nonlinear Dynamical Systems with Dynamic Stream Weights: Data fusion plays an important role in many technical applications that ...
- 06/07/2023, Multi-microphone Automatic Speech Segmentation in Meetings Based on Circular Harmonics Features: Speaker diarization is the task of answering "Who spoke and when?" in an a...
- 12/06/2018, Pitch-synchronous DCT features: A pilot study on speaker identification: We propose a new feature, namely, pitch-synchronous discrete cosine trans...
- 10/13/2020, Three-Dimensional Lip Motion Network for Text-Independent Speaker Recognition: Lip motion reflects behavior characteristics of speakers, and thus can b...
- 10/29/2022, The Secret Source: Incorporating Source Features to Improve Acoustic-to-Articulatory Speech Inversion: In this work, we incorporated acoustically derived source features, aper...
