Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis

09/22/2012
by Ingmar Steiner, et al.

The importance of modeling speech articulation for high-quality audiovisual (AV) speech synthesis is widely acknowledged. Nevertheless, while state-of-the-art, data-driven approaches to facial animation can make use of sophisticated motion capture techniques, the animation of the intraoral articulators (viz. the tongue, jaw, and velum) typically relies on simple rules or viseme morphing, in stark contrast to the otherwise high quality of facial modeling. Using appropriate speech production data could significantly improve the quality of articulatory animation for AV synthesis.
