End-to-End Speaker Diarization Conditioned on Speech Activity and Overlap Detection

06/08/2021
by Yuki Takashima, et al.

In this paper, we present a conditional multitask learning method for end-to-end neural speaker diarization (EEND). The EEND system has shown promising performance compared with traditional clustering-based methods, especially in the case of overlapping speech. To further improve the performance of the EEND system, we propose a novel multitask learning framework that solves speaker diarization together with a desired subtask while explicitly considering the dependency between the tasks. Based on the probabilistic chain rule, we optimize speaker diarization conditioned on speech activity and overlap detection, which are subtasks of speaker diarization. Experimental results show that our proposed method can leverage the subtasks to model speaker diarization effectively and outperforms conventional EEND systems in terms of diarization error rate.
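
As a rough guide to what "based on the probabilistic chain rule" means here, consider a minimal sketch in our own notation (the paper's exact formulation may differ). Writing X for the input acoustic features, Y for the frame-level speaker activities (diarization), V for speech activity, and O for overlap, the joint posterior can be factorized as

    p(Y, V, O | X) = p(Y | V, O, X) * p(O | V, X) * p(V | X),

so the diarization term p(Y | V, O, X) is optimized conditioned on the two subtask outputs, which is one way to make the task dependency explicit rather than treating the tasks as independent heads.

The short PyTorch sketch below illustrates how such conditioning could be wired into a multitask output layer. It is purely hypothetical: the class name, layer sizes, and the concatenation-based conditioning are our assumptions for illustration, not the architecture described in the paper.

    import torch
    import torch.nn as nn

    class ConditionalDiarizationHead(nn.Module):
        # Hypothetical sketch: diarization outputs are conditioned on
        # speech-activity and overlap posteriors, loosely following the
        # chain-rule factorization sketched above.
        def __init__(self, d_model: int, num_speakers: int):
            super().__init__()
            self.sad_head = nn.Linear(d_model, 1)               # p(V | X): speech activity
            self.overlap_head = nn.Linear(d_model + 1, 1)       # p(O | V, X): overlap
            self.diar_head = nn.Linear(d_model + 2, num_speakers)  # p(Y | V, O, X)

        def forward(self, h):
            # h: (batch, frames, d_model) encoder output
            v = torch.sigmoid(self.sad_head(h))                                   # speech-activity posterior
            o = torch.sigmoid(self.overlap_head(torch.cat([h, v], dim=-1)))       # overlap posterior given V
            y = torch.sigmoid(self.diar_head(torch.cat([h, v, o], dim=-1)))       # per-speaker activities given V, O
            return y, v, o

In such a setup, each head would typically receive its own binary cross-entropy loss (with permutation-free assignment for the diarization head, as in standard EEND training), and the losses would be summed for joint optimization.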

