CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning

12/16/2021
by   Xiangru Tang, et al.

Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although significant progress has been achieved with pre-trained models, substantial amounts of hallucinated content are still found during human evaluation. Pre-trained models are most commonly fine-tuned with a cross-entropy loss for text summarization, which may not be an optimal strategy. In this work, we provide a typology of factual errors, with annotated data, to highlight the types of errors and move away from a binary understanding of factuality. We further propose ConFiT, a training strategy that improves the factual consistency and overall quality of summaries via a novel contrastive fine-tuning. Based on our linguistically informed typology of errors, we design modular objectives that each target a specific error type. Specifically, we utilize hard negative samples containing errors to reduce the generation of factually inconsistent content. To capture the key information exchanged between speakers, we also design a dialogue-specific loss. Using human evaluation and automatic faithfulness metrics, we show that our model significantly reduces all kinds of factual errors on the SAMSum dialogue summarization corpus. Moreover, our model generalizes to the AMI meeting summarization corpus and achieves significantly higher scores than most baselines on both datasets in terms of word-overlap metrics.
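For intuition, the sketch below illustrates one common way to implement contrastive fine-tuning with hard negatives: a margin-based objective that rewards the model for assigning higher sequence likelihood to the reference summary than to perturbed summaries containing injected factual errors. This is a minimal illustration in PyTorch assuming a Hugging Face seq2seq model (e.g., BART); the function name, margin value, and negative-sampling setup are assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_fine_tune_loss(model, dialogue_ids, positive_ids,
                               negative_ids_list, margin=1.0):
    """Margin-based contrastive loss (illustrative sketch): the reference
    summary should receive a higher sequence log-likelihood than each
    hard-negative summary that contains an injected factual error."""

    def seq_log_likelihood(summary_ids):
        # Teacher-forced pass; the model computes mean token-level
        # cross-entropy over the summary tokens, so negate it to obtain
        # a (length-normalized) log-likelihood.
        out = model(input_ids=dialogue_ids, labels=summary_ids)
        return -out.loss

    pos_ll = seq_log_likelihood(positive_ids)
    hinge_terms = []
    for neg_ids in negative_ids_list:
        neg_ll = seq_log_likelihood(neg_ids)
        # Penalize whenever the negative is not at least `margin` less likely
        # than the reference summary.
        hinge_terms.append(F.relu(margin - (pos_ll - neg_ll)))
    return torch.stack(hinge_terms).mean()
```

In practice this term would be combined with the standard cross-entropy objective (and, in ConFiT, with the dialogue-specific loss described above), with negatives constructed to exemplify the error types in the typology.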
