Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data

03/08/2021
by Sanjeev Kumar Karn, et al.

Interleaved texts, in which posts belonging to different threads occur in a single sequence, are common in online chats and make it time-consuming to obtain an overview of the discussions. Existing systems first disentangle the posts by thread and then extract summaries from the resulting threads; a major issue with such systems is error propagation from the disentanglement component. While an end-to-end trainable summarization system could obviate explicit disentanglement, such a system requires a large amount of labeled data. To address this, we propose to pretrain an end-to-end trainable hierarchical encoder-decoder system using synthetic interleaved texts. We show that, after fine-tuning on a real-world meeting dataset (AMI), such a system outperforms a traditional two-step system by 22%, and we observe that pretraining both the encoder and the decoder with synthetic data outperforms the BertSumExtAbs transformer model, which pretrains only the encoder on a large dataset.
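To make the pretraining setup concrete, here is a rough sketch of one plausible way to synthesize interleaved pretraining pairs from single-thread conversations that already carry reference summaries. The `interleave` helper and the data layout (a list of posts plus a one-line summary per thread) are illustrative assumptions, not the authors' exact procedure:

```python
# Hedged sketch: synthesizing interleaved pretraining pairs by randomly
# merging single-thread conversations. The helper and data layout are
# assumptions for illustration, not the paper's exact construction.
import random

def interleave(threads, seed=0):
    """Merge posts from several threads into one sequence while
    preserving each thread's internal post order, as in real chats."""
    rng = random.Random(seed)
    queues = [(tid, list(posts)) for tid, (posts, _) in enumerate(threads)]
    merged = []
    while queues:
        i = rng.randrange(len(queues))      # pick a live thread at random
        tid, posts = queues[i]
        merged.append((tid, posts.pop(0)))  # emit that thread's next post
        if not posts:
            queues.pop(i)                   # drop exhausted threads
    return merged, [summary for _, summary in threads]

# Toy usage: two threads, each paired with a one-line reference summary.
threads = [
    (["the build fails on main", "the error is in setup.py"], "build failure on main"),
    (["who is joining the standup?", "I'll be 5 min late"], "standup attendance"),
]
posts, targets = interleave(threads)
for tid, post in posts:
    print(f"[thread {tid}] {post}")
print("summaries:", targets)
```

The resulting (interleaved posts, per-thread summaries) pairs give the model supervision for summarizing without an explicit disentanglement step.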
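Likewise, a minimal sketch of the kind of hierarchical encoder-decoder the abstract refers to: a word-level encoder over each post, a post-level encoder over the post vectors, and a decoder that attends over the post states. The GRU choice, layer sizes, and single attention head are illustrative assumptions, not the paper's reported configuration:

```python
# Hedged sketch of a hierarchical encoder-decoder summarizer; all
# architectural details below are assumptions, not the authors' exact model.
import torch
import torch.nn as nn

class HierSeq2Seq(nn.Module):
    def __init__(self, vocab, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.word_enc = nn.GRU(dim, dim, batch_first=True)  # within each post
        self.post_enc = nn.GRU(dim, dim, batch_first=True)  # across posts
        self.dec = nn.GRUCell(dim + dim, dim)               # input: emb + context
        self.attn = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, vocab)

    def forward(self, posts, target):
        # posts: (n_posts, post_len) token ids; target: (tgt_len,) token ids
        _, h_word = self.word_enc(self.emb(posts))          # (1, n_posts, dim)
        post_states, h_post = self.post_enc(h_word)         # (1, n_posts, dim)
        s = h_post.squeeze(0)                               # (1, dim) decoder state
        mem = post_states.squeeze(0)                        # (n_posts, dim)
        logits = []
        for t in range(target.size(0)):                     # teacher forcing
            scores = mem @ self.attn(s).squeeze(0)          # (n_posts,)
            ctx = scores.softmax(0).unsqueeze(0) @ mem      # (1, dim) context
            x = torch.cat([self.emb(target[t:t + 1]), ctx], dim=-1)
            s = self.dec(x, s)
            logits.append(self.out(s))
        return torch.stack(logits, dim=1)                   # (1, tgt_len, vocab)

# Toy usage with random token ids.
model = HierSeq2Seq(vocab=1000)
posts = torch.randint(0, 1000, (4, 12))   # 4 posts, 12 tokens each
target = torch.randint(0, 1000, (8,))     # 8-token summary
print(model(posts, target).shape)         # torch.Size([1, 8, 1000])
```

Because both the encoder and this decoder are trained end to end on the synthetic pairs, no separate disentanglement component is needed at inference time.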


