Putting a Face to the Voice: Fusing Audio and Visual Signals Across a Video to Determine Speakers

05/31/2017
by Ken Hoover, et al.

In this paper, we present a system that associates faces with voices in a video by fusing information from the audio and visual signals. The thesis underlying our work is that an extremely simple approach to generating (weak) speech clusters can be combined with visual signals to effectively associate faces and voices by aggregating statistics across a video. This approach does not need any training data specific to this task and leverages the natural coherence of information in the audio and visual streams. It is particularly applicable to tracking speakers in videos on the web where a priori information about the environment (e.g., number of speakers, spatial signals for beamforming) is not available. We performed experiments on a real-world dataset using this analysis framework to determine the speaker in a video. Given a ground truth labeling determined by human rater consensus, our approach had 71
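The abstract describes aggregating co-occurrence statistics between weak speech clusters and visible faces across a video. As a rough illustration of that idea (not the authors' actual method), the sketch below assumes each audio segment carries a weak speech-cluster label and a set of face-track ids visible during that segment, then assigns each cluster to the face it co-occurs with most often; all names here are hypothetical.

```python
import numpy as np

def associate_faces_with_voices(speech_cluster_ids, visible_faces):
    """Hypothetical sketch: speech_cluster_ids[t] is the weak speech-cluster
    label active during audio segment t, and visible_faces[t] is the set of
    face-track ids on screen during that segment. Co-occurrence counts are
    aggregated across the whole video, and each speech cluster is assigned
    to the face track it co-occurs with most often."""
    clusters = sorted(set(speech_cluster_ids))
    faces = sorted({f for fs in visible_faces for f in fs})
    c_idx = {c: i for i, c in enumerate(clusters)}
    f_idx = {f: j for j, f in enumerate(faces)}

    # Accumulate (speech cluster, face track) co-occurrence counts.
    counts = np.zeros((len(clusters), len(faces)))
    for c, fs in zip(speech_cluster_ids, visible_faces):
        for f in fs:
            counts[c_idx[c], f_idx[f]] += 1

    # Assign each cluster to its most frequently co-occurring face.
    return {c: faces[int(np.argmax(counts[c_idx[c]]))] for c in clusters}
```

Because the assignment only depends on statistics summed over the whole video, a weak (noisy) clustering can still yield the right association, which is the intuition the abstract leans on.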
