Keyphrase Prediction With Pre-trained Language Model

04/22/2020
by   Rui Liu, et al.

Recently, generative methods have been widely used in keyphrase prediction, thanks to their ability to produce both present keyphrases, which appear in the source text, and absent keyphrases, which do not. However, absent keyphrases are generated at the cost of performance on present keyphrase prediction, since previous works mainly use generative models that rely on the copying mechanism and select words step by step. Moreover, an extractive model that directly extracts text spans is better suited to predicting present keyphrases. Considering the different characteristics of extractive and generative methods, we propose dividing keyphrase prediction into two subtasks, i.e., present keyphrase extraction (PKE) and absent keyphrase generation (AKG), to fully exploit their respective advantages. On this basis, we propose a joint inference framework to make the most of BERT in both subtasks. For PKE, we treat the task as a sequence labeling problem with the pre-trained language model BERT. For AKG, we introduce a Transformer-based architecture that fully integrates the present-keyphrase knowledge learned in PKE by the fine-tuned BERT. Experimental results show that our approach achieves state-of-the-art results on both tasks across benchmark datasets.
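The abstract frames PKE as sequence labeling over the source text. A minimal sketch of the decoding side of such a setup, assuming a BIO tagging scheme (an assumption; the abstract does not specify the label set), is the step that turns per-token labels from a BERT tagger into present keyphrase spans:

```python
def extract_keyphrases(tokens, labels):
    """Collect contiguous B/I-tagged spans into keyphrases (BIO scheme).

    tokens: list of word strings.
    labels: parallel list of tags, each "B", "I", or "O".
    """
    phrases, current = [], []
    for tok, lab in zip(tokens, labels):
        if lab == "B":
            # "B" starts a new phrase; close any phrase in progress.
            if current:
                phrases.append(" ".join(current))
            current = [tok]
        elif lab == "I" and current:
            # "I" continues the phrase opened by the last "B".
            current.append(tok)
        else:
            # "O" (or a stray "I" with no open phrase) closes the span.
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases


tokens = "we propose a joint inference framework for keyphrase prediction".split()
labels = ["O", "O", "O", "B", "I", "I", "O", "B", "I"]
print(extract_keyphrases(tokens, labels))
# → ['joint inference framework', 'keyphrase prediction']
```

In the paper's setting the `labels` would come from a fine-tuned BERT token classifier rather than being hand-written as here.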
