Learning-by-Narrating: Narrative Pre-Training for Zero-Shot Dialogue Comprehension

03/19/2022
by Chao Zhao, et al.

Comprehending a dialogue requires a model to capture diverse kinds of key information in the utterances, which are either scattered across or only implied in different turns of the conversation. Dialogue comprehension therefore calls for diverse capabilities such as paraphrasing, summarizing, and commonsense reasoning. Towards the objective of pre-training a zero-shot dialogue comprehension model, we develop a novel narrative-guided pre-training strategy that learns by narrating the key information from a dialogue input. However, no dialogue-narrative parallel corpus exists for such a pre-training strategy. We therefore first construct a dialogue-narrative parallel corpus by automatically aligning movie subtitles with their synopses. We then pre-train a BART model on this data and evaluate its performance on four dialogue-based tasks that require comprehension. Experimental results show that our model not only achieves superior zero-shot performance but also exhibits stronger fine-grained dialogue comprehension capabilities. The data and code are available at https://github.com/zhaochaocs/Diana.
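The sketch below is a minimal, illustrative view of the narrative-guided pre-training objective described in the abstract: a BART encoder-decoder is trained to generate a narrative (synopsis-style) rendering of a dialogue with the standard sequence-to-sequence loss. It is not the authors' pipeline (see the linked repository for that); the "facebook/bart-large" checkpoint, the toy dialogue-narrative pair, and the hyperparameters are assumptions chosen for illustration.

```python
# Minimal sketch of narrative-guided pre-training (illustrative, not the
# authors' code): teach BART to narrate the key content of a dialogue.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One toy (dialogue, narrative) pair, standing in for an automatically
# aligned subtitle/synopsis corpus.
dialogue = ("A: Did you finish the report? "
            "B: Not yet, the printer broke down again.")
narrative = "B explains that a broken printer has delayed the report."

batch = tokenizer(dialogue, return_tensors="pt",
                  truncation=True, max_length=1024)
labels = tokenizer(narrative, return_tensors="pt",
                   truncation=True, max_length=128).input_ids

# Standard seq2seq cross-entropy loss on the narrative tokens; BART shifts
# the labels internally to build the decoder inputs.
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

# After pre-training, the model can narrate unseen dialogues zero-shot.
generated = model.generate(**batch, max_length=60, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In practice the same loop would run over the full subtitle-synopsis corpus with batching and a learning-rate schedule; the single gradient step here only shows the shape of the objective.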

