MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding

06/03/2021
by Jia-Chen Gu, et al.

Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on tasks such as addressee recognition, speaker identification, and response prediction. However, existing methods for MPC usually represent interlocutors and utterances individually and ignore the inherently complicated structure of MPC, which carries crucial interlocutor and utterance semantics and could enhance conversation understanding. To this end, we present MPC-BERT, a pre-trained model for MPC understanding that learns who says what to whom in a unified model through several elaborately designed self-supervised tasks. These tasks fall into two categories: (1) interlocutor structure modeling, comprising reply-to utterance recognition, identical speaker searching, and pointer consistency distinction; and (2) utterance semantics modeling, comprising masked shared utterance restoration and shared node detection. We evaluate MPC-BERT on three downstream tasks: addressee recognition, speaker identification, and response selection. Experimental results show that MPC-BERT outperforms previous methods by large margins and achieves new state-of-the-art performance on all three downstream tasks on two benchmarks.
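To make one of the pre-training objectives concrete, here is a minimal PyTorch sketch of the reply-to utterance recognition task described above. It is an illustrative reconstruction, not the authors' released code: the bert-base-uncased backbone and the bilinear scoring head are assumptions chosen for brevity, not the paper's exact architecture.

```python
# Hedged sketch of a "reply-to utterance recognition" objective: encode each
# utterance, then predict which *earlier* utterance it replies to.
# Assumptions: bert-base-uncased backbone and a bilinear scoring head are
# illustrative stand-ins for the paper's actual pre-training setup.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ReplyToHead(nn.Module):
    """Scores every (utterance, candidate parent) pair."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.bilinear = nn.Bilinear(hidden_size, hidden_size, 1)

    def forward(self, utt_reprs: torch.Tensor) -> torch.Tensor:
        # utt_reprs: (num_utterances, hidden), one vector per utterance.
        n, h = utt_reprs.shape
        child = utt_reprs.unsqueeze(1).expand(n, n, h)   # row i: utterance i
        parent = utt_reprs.unsqueeze(0).expand(n, n, h)  # col j: candidate parent j
        scores = self.bilinear(child.reshape(-1, h), parent.reshape(-1, h)).view(n, n)
        # An utterance can only reply to an earlier one: mask positions j >= i.
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool))
        return scores.masked_fill(mask, float("-inf"))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
head = ReplyToHead(encoder.config.hidden_size)

# A toy three-utterance MPC; u1 replies to u0, u2 replies to u1.
utterances = [
    "Anyone know why the build fails on main?",  # u0
    "Which commit are you on?",                  # u1
    "The one from this morning.",                # u2
]
gold_parents = torch.tensor([0, 1])  # parents of u1 and u2 (u0 starts the thread)

batch = tokenizer(utterances, padding=True, return_tensors="pt")
utt_reprs = encoder(**batch).last_hidden_state[:, 0]  # [CLS] vector per utterance

scores = head(utt_reprs)  # (3, 3); invalid parents masked out
loss = nn.functional.cross_entropy(scores[1:], gold_parents)
print(f"reply-to recognition loss: {loss.item():.4f}")
```

In the paper's setup, this kind of structure-modeling objective is optimized jointly with the other self-supervised tasks on top of a shared encoder, so gradients from each task shape a single set of conversation-aware representations.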

Related research

05/16/2023 · GIFT: Graph-Induced Fine-Tuning for Multi-Party Conversation Understanding
Addressing the issues of who says what to whom in multi-party conversa...

05/16/2022 · PRISM: Pre-trained Indeterminate Speaker Representation Model for Speaker Diarization and Speaker Verification
Speaker embedding has been a fundamental feature for speaker-related tas...

05/22/2023 · MADNet: Maximizing Addressee Deduction Expectation for Multi-Party Conversation Generation
Modeling multi-party conversations (MPCs) with graph neural networks has...

03/13/2023 · Analysing the Masked predictive coding training criterion for pre-training a Speech Representation Model
Recent developments in pre-trained speech representation utilizing self-...

04/08/2020 · DialBERT: A Hierarchical Pre-Trained Model for Conversation Disentanglement
Disentanglement is a problem in which multiple conversations occur in th...

10/27/2022 · Conversation Disentanglement with Bi-Level Contrastive Learning
Conversation disentanglement aims to group utterances into detached sess...

05/20/2020 · A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition
Building a good speech recognition system usually requires large amounts...
