How to Design a Three-Stage Architecture for Audio-Visual Active Speaker Detection in the Wild

06/07/2021 · by Okan Köpüklü, et al.

Successful active speaker detection requires a three-stage pipeline: (i) audio-visual encoding for all speakers in the clip, (ii) inter-speaker relation modeling between a reference speaker and the background speakers within each frame, and (iii) temporal modeling for the reference speaker. Each stage of this pipeline plays an important role in the final performance of the resulting architecture. Based on a series of controlled experiments, this work presents several practical guidelines for audio-visual active speaker detection. Correspondingly, we present a new architecture called ASDNet, which achieves a new state-of-the-art on the AVA-ActiveSpeaker dataset with an mAP of 93.5%, outperforming the second best by a large margin of 4.7%. The code and pretrained models are publicly available.
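To make the three stages concrete, below is a minimal PyTorch sketch of such a pipeline (the released code is PyTorch-based). All class names, feature dimensions, and the MLP/GRU stand-ins are illustrative assumptions for this sketch, not ASDNet's actual encoders or temporal model.

```python
import torch
import torch.nn as nn

class ThreeStageASD(nn.Module):
    """Illustrative sketch of a three-stage active speaker detection pipeline.

    Stage (i):   audio-visual encoding, one embedding per speaker
    Stage (ii):  inter-speaker relation modeling between the reference
                 speaker and the background speakers within each frame
    Stage (iii): temporal modeling over the reference speaker's features
    """

    def __init__(self, feat_dim=128):
        super().__init__()
        # Stage (i): placeholder audio and visual encoders; small MLPs
        # stand in for the CNN backbones a real system would use.
        self.audio_enc = nn.Sequential(nn.Linear(40, feat_dim), nn.ReLU())
        self.video_enc = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
        # Stage (ii): fuse the reference speaker with each background
        # speaker, then pool over background speakers per frame.
        self.relation = nn.Sequential(
            nn.Linear(4 * feat_dim, feat_dim), nn.ReLU())
        # Stage (iii): recurrent temporal model over per-frame features.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, 1)  # speaking / not speaking

    def forward(self, ref_audio, ref_video, bg_audio, bg_video):
        # ref_*: (batch, time, raw_dim); bg_*: (batch, time, n_bg, raw_dim)
        ref = torch.cat([self.audio_enc(ref_audio),
                         self.video_enc(ref_video)], dim=-1)
        bg = torch.cat([self.audio_enc(bg_audio),
                        self.video_enc(bg_video)], dim=-1)
        # Pair the reference with each background speaker in every frame.
        ref_exp = ref.unsqueeze(2).expand_as(bg)
        rel = self.relation(torch.cat([ref_exp, bg], dim=-1)).max(dim=2).values
        # Temporal modeling of the reference speaker's relation features.
        out, _ = self.temporal(rel)
        return self.classifier(out).squeeze(-1)  # per-frame logits
```

A quick shape check under these assumed input dimensions:

```python
B, T, N = 2, 16, 3  # batch, frames, background speakers
model = ThreeStageASD()
logits = model(torch.randn(B, T, 40), torch.randn(B, T, 512),
               torch.randn(B, T, N, 40), torch.randn(B, T, N, 512))
print(logits.shape)  # torch.Size([2, 16]): one logit per frame
```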


Code Repositories

ASDNet

Audio-Visual Active Speaker Detection with PyTorch on AVA-ActiveSpeaker dataset

