Enhancing Speaker Diarization with Large Language Models: A Contextual Beam Search Approach

09/11/2023
by Tae Jin Park, et al.

Large language models (LLMs) have shown great promise for capturing contextual information in natural language processing tasks. We propose a novel approach to speaker diarization that leverages the strengths of LLMs to exploit contextual cues in human dialogues. Our method builds upon an acoustic-based speaker diarization system by adding lexical information from an LLM at the inference stage. We model the multi-modal decoding process probabilistically and perform a joint acoustic and lexical beam search to incorporate cues from both modalities: audio and text. Our experiments demonstrate that infusing lexical knowledge from the LLM into an acoustics-only diarization system improves the overall speaker-attributed word error rate (SA-WER). The experimental results show that LLMs can provide complementary information to acoustic models for the speaker diarization task via the proposed beam search decoding approach, yielding up to a 39.8% improvement over the baseline system. Thus, we substantiate that the proposed technique is able to exploit contextual information that is inaccessible to acoustics-only systems, in which speaker identity is represented solely by speaker embeddings. In addition, these findings point to the potential of using LLMs to improve speaker diarization and other speech processing tasks by capturing semantic and contextual cues.
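To make the joint decoding idea concrete, the sketch below is a minimal, hypothetical illustration of a joint acoustic and lexical beam search over per-word speaker labels, not the authors' released implementation. It assumes per-word acoustic speaker posteriors given as log-probabilities and a stand-in scoring function, llm_speaker_logprob, representing an LLM's estimate of how likely the next word belongs to each speaker given the dialogue so far; the interpolation weight alpha, the beam size, and all names are illustrative assumptions.

import math
from typing import Callable, List, Sequence, Tuple

def joint_beam_search(
    words: Sequence[str],
    acoustic_logprobs: Sequence[Sequence[float]],   # [num_words][num_speakers], log domain
    llm_speaker_logprob: Callable[[List[Tuple[str, int]], str, int], float],
    num_speakers: int,
    alpha: float = 0.5,        # weight on the lexical (LLM) score; assumption
    beam_size: int = 8,
) -> List[Tuple[str, int]]:
    """Return a speaker-attributed transcript as a list of (word, speaker_id) pairs."""
    # Each beam hypothesis is (cumulative score, partial speaker assignment).
    beams: List[Tuple[float, List[Tuple[str, int]]]] = [(0.0, [])]

    for t, word in enumerate(words):
        candidates: List[Tuple[float, List[Tuple[str, int]]]] = []
        for score, history in beams:
            for spk in range(num_speakers):
                # Combine acoustic and lexical evidence in the log domain.
                acoustic = acoustic_logprobs[t][spk]
                lexical = llm_speaker_logprob(history, word, spk)
                new_score = score + (1.0 - alpha) * acoustic + alpha * lexical
                candidates.append((new_score, history + [(word, spk)]))
        # Prune to the top `beam_size` hypotheses before the next word.
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_size]

    return beams[0][1]

# Toy usage with a dummy LLM scorer that mildly prefers continuing the same speaker.
def dummy_llm(history, word, spk):
    if history and history[-1][1] == spk:
        return math.log(0.7)
    return math.log(0.3)

words = ["hello", "there", "hi"]
acoustic = [[math.log(0.9), math.log(0.1)],
            [math.log(0.6), math.log(0.4)],
            [math.log(0.2), math.log(0.8)]]
print(joint_beam_search(words, acoustic, dummy_llm, num_speakers=2))

Interpolating the two log-scores lets lexical context override weak acoustic evidence (for example, a short back-channel word spoken close to another speaker), which is the intuition behind combining the two modalities at inference time.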
