SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization

03/24/2023
by Yi-Syuan Chen, et al.

Neural abstractive summarization has been widely studied and has achieved great success with large-scale corpora. However, the considerable cost of annotating data motivates learning strategies for low-resource settings. In this paper, we investigate the problem of learning summarizers from only a few examples and propose corresponding methods for improvement. First, typical transfer learning methods are prone to being affected by the data properties and learning objectives of the pretext tasks. Therefore, building on pretrained language models, we present a meta-learning framework that transfers few-shot learning processes from source corpora to the target corpus. Second, previous methods learn from training examples without decomposing content and preference. The generated summaries can therefore be constrained by the preference bias in the training set, especially under low-resource settings. We thus propose decomposing content and preferences during learning through parameter modulation, which enables control over preferences during inference. Third, given a target application, specifying the required preferences can be non-trivial, because the preferences may be difficult to derive through observation. Therefore, we propose a novel decoding method that automatically estimates suitable preferences and generates corresponding summary candidates from the few training examples. Extensive experiments demonstrate that our methods achieve state-of-the-art performance on six diverse corpora, with 30.11 ROUGE-1/2/L under the 10- and 100-example settings.
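To make the preference-decomposition idea concrete, below is a minimal sketch (not the authors' released code) of FiLM-style parameter modulation: a small network maps a preference vector (e.g., desired length, extractiveness, specificity) to per-channel scale and shift parameters that modulate a summarizer's hidden states. The class name, preference dimensions, and shapes here are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of preference-conditioned parameter modulation
# (FiLM-style). Names like PreferenceModulator and pref_dim are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class PreferenceModulator(nn.Module):
    """Maps a preference vector (e.g., length, extractiveness,
    specificity) to per-channel scale and shift applied to a
    summarizer's hidden states."""
    def __init__(self, pref_dim: int, hidden_dim: int):
        super().__init__()
        self.to_scale = nn.Linear(pref_dim, hidden_dim)
        self.to_shift = nn.Linear(pref_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim); pref: (batch, pref_dim)
        scale = self.to_scale(pref).unsqueeze(1)  # (batch, 1, hidden_dim)
        shift = self.to_shift(pref).unsqueeze(1)  # (batch, 1, hidden_dim)
        # Modulate content representations by the preference signal.
        return hidden * (1 + scale) + shift

# Usage: modulate fake encoder states with a 3-dim preference vector.
mod = PreferenceModulator(pref_dim=3, hidden_dim=768)
hidden = torch.randn(2, 128, 768)
pref = torch.tensor([[0.2, 0.8, 0.5], [0.9, 0.1, 0.3]])
out = mod(hidden, pref)
print(out.shape)  # torch.Size([2, 128, 768])
```

Because the modulation is an explicit function of the preference vector, inference-time control reduces to choosing or estimating that vector, which is the role the abstract assigns to the proposed decoding method.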


Related research

Meta-Transfer Learning for Low-Resource Abstractive Summarization (02/18/2021)
Neural abstractive summarization has been studied in many pieces of lite...

Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation (07/07/2022)
Large pretrained language models (PLMs) are often domain- or task-adapte...

Transfer Deep Learning for Low-Resource Chinese Word Segmentation with a Novel Neural Network (02/15/2017)
Recent studies have shown effectiveness in using neural networks for Chi...

Meta-Learning for Low-Resource Neural Machine Translation (08/25/2018)
In this paper, we propose to extend the recently introduced model-agnost...

Retrieval-Augmented Meta Learning for Low-Resource Text Classification (09/10/2023)
Meta learning has achieved promising performance in low-resource text c...

CoP: Factual Inconsistency Detection by Controlling the Preference (12/03/2022)
Abstractive summarization is the process of generating a summary given a...

Low-Resources Project-Specific Code Summarization (10/21/2022)
Code summarization generates brief natural language descriptions of sour...
