Leverage Unlabeled Data for Abstractive Speech Summarization with Self-Supervised Learning and Back-Summarization

07/30/2020
by Paul Tardy, et al.

Supervised approaches to neural abstractive summarization require large annotated corpora that are costly to build. We present a French meeting summarization task in which reports are predicted from the automatic transcription of the meeting's audio recording. Building a corpus for this task requires obtaining the (automatic or manual) transcription of each meeting, then segmenting and aligning it with the corresponding manual report to produce suitable training examples. On the other hand, we have access to a very large amount of unaligned data, in particular reports without corresponding transcriptions. These reports are professionally written and well formatted, which makes pre-processing straightforward. In this context, we study how to take advantage of this massive amount of unaligned data using two approaches: (i) self-supervised pre-training with a target-side denoising encoder-decoder model; (ii) back-summarization, i.e., reversing the summarization process by learning to predict the transcription given the report, so that standalone reports can be aligned with generated transcriptions and the resulting synthetic dataset used for further training. Both approaches yield large improvements over the previous baseline (trained on aligned data only) on two evaluation sets. Moreover, combining the two gives even better results, outperforming the baseline by +6 ROUGE-1 and ROUGE-L and +5 ROUGE-2 on both evaluation sets.
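As a rough illustration of the data flow behind the two approaches, here is a minimal Python sketch: a noising function for target-side denoising pre-training, and a back-summarization step that pairs standalone reports with generated transcriptions. The noise types, the probabilities, and the reverse_model.generate interface are illustrative assumptions, not the paper's actual implementation.

    import random

    def noise_report(tokens, drop_prob=0.1, mask_prob=0.1, mask_token="<mask>"):
        # Approach (i), target-side denoising: corrupt a clean report so an
        # encoder-decoder can be pre-trained to reconstruct it from the
        # noised version. Probabilities here are assumptions for illustration.
        noised = []
        for tok in tokens:
            r = random.random()
            if r < drop_prob:
                continue                       # delete the token
            elif r < drop_prob + mask_prob:
                noised.append(mask_token)      # mask the token
            else:
                noised.append(tok)             # keep the token unchanged
        return noised

    def back_summarize(reports, reverse_model):
        # Approach (ii), back-summarization: a reverse model
        # (report -> transcription), trained on the small aligned corpus,
        # generates a synthetic transcription for each standalone report.
        # The (synthetic transcription, real report) pairs then augment the
        # summarization training set. `reverse_model.generate` is a
        # hypothetical interface standing in for any seq2seq decoder.
        return [(reverse_model.generate(report), report) for report in reports]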
