
Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data

03/08/2021
by Sanjeev Kumar Karn, et al.

Interleaved texts, in which posts belonging to different threads occur in sequence, are common in online chat, making it time-consuming to obtain an overview of the discussions. Existing systems first disentangle the posts by thread and then extract summaries from those threads. A major issue with such systems is error propagation from the disentanglement component. While an end-to-end trainable summarization system could obviate explicit disentanglement, such systems require a large amount of labeled data. To address this, we propose to pretrain an end-to-end trainable hierarchical encoder-decoder system using synthetic interleaved texts. We show that after fine-tuning on a real-world meeting dataset (AMI), such a system outperforms a traditional two-step system by 22, and that pretraining both the encoder and the decoder with synthetic data outperforms the BertSumExtAbs transformer model, which pretrains only the encoder on a large dataset.
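The abstract does not specify how the synthetic interleaved texts are constructed; the sketch below is one plausible reading, assuming that single-thread post sequences are randomly interleaved (preserving within-thread order) and paired with a per-thread pseudo-summary (here, simply the first post of each thread). The function name `make_synthetic_example` and the choice of pseudo-summary are illustrative, not taken from the paper.

```python
import random

def make_synthetic_example(threads, seed=0):
    """Interleave posts from several threads into a single sequence,
    keeping the original order within each thread, and pair the result
    with one pseudo-summary per thread (here: the thread's first post).

    threads: list of lists of posts (strings), one inner list per thread.
    Returns (interleaved, summaries), where interleaved is a list of
    (thread_id, post) pairs and summaries is a list of strings.
    """
    rng = random.Random(seed)
    # Remaining posts per thread; threads are drained in random order.
    queues = {tid: list(posts) for tid, posts in enumerate(threads)}
    interleaved = []
    while queues:
        tid = rng.choice(sorted(queues))      # pick a still-active thread
        interleaved.append((tid, queues[tid].pop(0)))
        if not queues[tid]:
            del queues[tid]                    # thread exhausted
    summaries = [posts[0] for posts in threads]
    return interleaved, summaries

# Example: two chat threads mixed into one interleaved sequence.
threads = [["a1", "a2"], ["b1", "b2", "b3"]]
interleaved, summaries = make_synthetic_example(threads, seed=1)
```

Such generated pairs could serve as pretraining data for an encoder-decoder summarizer, with the interleaved sequence as input and the per-thread summaries as targets; the real paper fine-tunes on AMI afterwards.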


06/05/2019

Generating Multi-Sentence Abstractive Summaries of Interleaved Texts

In multi-participant postings, as in online chat conversations, several ...
05/13/2022

ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation

We present ViT5, a pretrained Transformer-based encoder-decoder model fo...
05/23/2022

Decoder Denoising Pretraining for Semantic Segmentation

Semantic segmentation labels are expensive and time consuming to acquire...
10/12/2020

End-to-End Synthetic Data Generation for Domain Adaptation of Question Answering Systems

We propose an end-to-end approach for synthetic QA data generation. Our ...
09/09/2021

Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining

With the rapid increase in the volume of dialogue data from daily life, ...
06/27/2020

Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization

Neural models have become successful at producing abstractive summaries ...
07/02/2022

An End-to-End Set Transformer for User-Level Classification of Depression and Gambling Disorder

This work proposes a transformer architecture for user-level classificat...