Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study

12/20/2022
by Di Wu et al.

Neural models that do not rely on pre-training have excelled at keyphrase generation when large annotated datasets are available. Meanwhile, newer approaches have incorporated pre-trained language models (PLMs) for their data efficiency. However, there has been no systematic study of how the two types of approaches compare or of how different design choices affect the performance of PLM-based models. To fill this knowledge gap and facilitate a more informed use of PLMs for keyphrase extraction and keyphrase generation, we present an in-depth empirical study. Formulating keyphrase extraction as sequence labeling and keyphrase generation as sequence-to-sequence generation, we perform extensive experiments in three domains. After showing that PLMs achieve competitive high-resource performance and state-of-the-art low-resource performance, we investigate important design choices, including in-domain PLMs, PLMs with different pre-training objectives, using PLMs under a fixed parameter budget, and different formulations for present keyphrases. Our results show that (1) in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models; (2) under a fixed parameter budget, prioritizing model depth over width and allocating more layers to the encoder leads to better encoder-decoder models; and (3) by introducing four in-domain PLMs, we achieve competitive performance in the news domain and state-of-the-art performance in the scientific domain.
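The two formulations named in the abstract map onto standard PLM fine-tuning recipes. Below is a minimal sketch, assuming Hugging Face Transformers with generic bert-base-uncased and facebook/bart-base checkpoints rather than the paper's released in-domain models: keyphrase extraction as BIO sequence labeling over document tokens, and keyphrase generation as sequence-to-sequence decoding of a delimiter-joined keyphrase string. The ";" delimiter, the label scheme, and the layer counts at the end are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of the two formulations; checkpoints and conventions are assumptions.
import torch
from transformers import (AutoModelForSeq2SeqLM, AutoModelForTokenClassification,
                          AutoTokenizer, BartConfig)

doc = "Pre-trained language models improve keyphrase generation."

# Keyphrase extraction as sequence labeling: tag each token O/B/I, then read
# contiguous B-I spans off as present keyphrases once the head is fine-tuned.
ext_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ext_model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # 0=O, 1=B, 2=I; head untrained here
enc = ext_tok(doc, return_tensors="pt")
with torch.no_grad():
    tags = ext_model(**enc).logits.argmax(-1)[0].tolist()
print(list(zip(ext_tok.convert_ids_to_tokens(enc["input_ids"][0]), tags)))

# Keyphrase generation as sequence-to-sequence generation: the model decodes
# present and absent keyphrases as one delimiter-separated target string.
gen_tok = AutoTokenizer.from_pretrained("facebook/bart-base")
gen_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
with torch.no_grad():
    out = gen_model.generate(**gen_tok(doc, return_tensors="pt"),
                             num_beams=4, max_length=64)
# A fine-tuned model would emit e.g. "language models ; keyphrase generation".
print([kp.strip() for kp in
       gen_tok.batch_decode(out, skip_special_tokens=True)[0].split(";")])

# Finding (2) as a config sketch: under a fixed parameter budget, favor a
# deeper, narrower, encoder-heavy layout; the exact sizes are assumptions.
deep_encoder_cfg = BartConfig(encoder_layers=8, decoder_layers=4, d_model=512,
                              encoder_attention_heads=8,
                              decoder_attention_heads=8)
```

Note that the untrained extraction head makes the printed tags meaningless until fine-tuning; the sketch only shows the shape of the two formulations and, in the final config, the depth-over-width, encoder-heavy allocation reported to work best.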

