Structural Pre-training for Dialogue Comprehension

05/23/2021
by   Zhuosheng Zhang, et al.

Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training. However, even with the help of powerful PrLMs, it is still challenging to effectively capture task-related knowledge from dialogue texts, which are enriched with correlations among speaker-aware utterances. In this work, we present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue-exclusive features. To simulate these dialogue-specific features, we propose two training objectives in addition to the original LM objectives: 1) utterance order restoration, which predicts the original order of permuted utterances in the dialogue context; 2) sentence backbone regularization, which regularizes the model to improve the factual correctness of summarized subject-verb-object triplets. Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
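The utterance order restoration objective described above can be sketched as a data-construction step: shuffle the utterances of a dialogue and keep the original positions as the prediction target. The following is a minimal illustrative sketch, not the authors' implementation; the helper name `permute_utterances` and the label format are assumptions for illustration only.

```python
import random

def permute_utterances(utterances, seed=None):
    """Build one training instance for utterance order restoration.

    Returns (shuffled, target_order), where target_order[i] is the
    original position of shuffled[i] -- the label the model must
    predict to restore the coherent utterance order.
    """
    rng = random.Random(seed)
    order = list(range(len(utterances)))
    rng.shuffle(order)
    shuffled = [utterances[i] for i in order]
    return shuffled, order

# Usage: construct a self-supervised example from a short dialogue.
dialogue = ["A: Hi!", "B: Hello, how can I help?", "A: I lost my card."]
shuffled, target = permute_utterances(dialogue, seed=0)
```

Because the labels are derived from the data itself, no human annotation is needed, which is what makes the objective self-supervised.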


Related research

09/10/2020 - Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation
Pre-trained Language Models (PrLMs) have been widely used as backbones i...

09/22/2021 - DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder
With the rapid development of artificial intelligence, conversational bo...

05/21/2021 - Towards a Universal NLG for Dialogue Systems and Simulators with Future Bridging
In a dialogue system pipeline, a natural language generation (NLG) unit ...

03/02/2023 - Matching-based Term Semantics Pre-training for Spoken Patient Query Understanding
Medical Slot Filling (MSF) task aims to convert medical queries into str...

06/30/2019 - Self-Supervised Dialogue Learning
The sequential order of utterances is often meaningful in coherent dialo...

07/08/2022 - DSTEA: Dialogue State Tracking with Entity Adaptive Pre-training
Dialogue state tracking (DST) is a core sub-module of a dialogue system,...

06/01/2021 - Dialogue-oriented Pre-training
Pre-trained language models (PrLM) has been shown powerful in enhancing ...
