Understanding the visual speech signal

10/03/2017
by Helen L. Bear, et al.

For machines to lipread, that is, to understand speech from lip movement alone, they must decode lip motions (grouped into visual units called visemes) into the spoken sounds they represent. We investigate the visual speech channel to further our understanding of visemes. This work has applications beyond machine lipreading: speech therapists, animators, and psychologists can all benefit from it. We explain the influence of speaker individuality and demonstrate how visemes can be used to boost lipreading performance.
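The decoding step described above is hard because the phoneme-to-viseme mapping is many-to-one: several phonemes can look identical on the lips. A minimal sketch of that ambiguity, assuming illustrative viseme classes and names (not the groupings used in the paper), following the classic observation that phonemes sharing a place of articulation look alike:

```python
# Hypothetical many-to-one phoneme-to-viseme mapping. Class names
# (V_bilabial, etc.) are illustrative, not from the paper.
PHONEME_TO_VISEME = {
    "p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",
    "f": "V_labiodental", "v": "V_labiodental",
    "t": "V_alveolar", "d": "V_alveolar", "s": "V_alveolar", "z": "V_alveolar",
}

def viseme_candidates(viseme: str) -> set[str]:
    """Return every phoneme a single observed viseme could represent."""
    return {p for p, v in PHONEME_TO_VISEME.items() if v == viseme}

# "pat", "bat", and "mat" all start with the same viseme, so a visual-only
# decoder needs extra context (e.g. a language model) to choose among them.
print(sorted(viseme_candidates("V_bilabial")))  # -> ['b', 'm', 'p']
```

Resolving this ambiguity with context is why viseme-level modelling can boost a visual-only recognizer.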

Related research

04/05/2022
What can predictive speech coders learn from speaker recognizers?
This paper compares the speech coder and speaker recognizer applications...

10/24/2022
Weak-Supervised Dysarthria-invariant Features for Spoken Language Understanding using an FHVAE and Adversarial Training
The scarcity of training data and the large speaker variation in dysarth...

09/09/2022
Reconstructing the Dynamic Directivity of Unconstrained Speech
An accurate model of natural speech directivity is an important step tow...

06/12/2020
"Notic My Speech" – Blending Speech Patterns With Multimedia
Speech as a natural signal is composed of three parts - visemes (visual ...

10/03/2017
Visual gesture variability between talkers in continuous visual speech
Recent adoption of deep learning methods to the field of machine lipread...

05/01/2021
It's not what you said, it's how you said it: discriminative perception of speech as a multichannel communication system
People convey information extremely effectively through spoken interacti...

10/28/2021
E-ffective: A Visual Analytic System for Exploring the Emotion and Effectiveness of Inspirational Speeches
What makes speeches effective has long been a subject for debate, and un...
