Representation Learning for Resource-Constrained Keyphrase Generation

03/15/2022
by Di Wu, et al.

State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with constrained resources. To overcome this challenge, we investigate strategies for learning an intermediate representation suited to the keyphrase generation task. We introduce salient span recovery and salient span prediction as guided denoising language modeling objectives that condense the domain-specific knowledge essential for keyphrase generation. Through experiments on multiple scientific keyphrase generation benchmarks, we show the effectiveness of the proposed approach for facilitating low-resource and zero-shot keyphrase generation. Furthermore, we observe that our method especially benefits the generation of absent keyphrases, approaching the performance of state-of-the-art methods trained with large training sets.
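To make the two objectives concrete, here is a minimal sketch of how salient-span denoising examples might be constructed. The span selection heuristic, mask token, and separator below are illustrative assumptions, not the authors' implementation; the paper's actual pretraining pipeline is more involved.

```python
# Illustrative sketch: building training pairs for salient span recovery (SSR)
# and salient span prediction (SSP). Assumption: "salient spans" are
# keyphrase-like spans already identified in the source text.

MASK = "<mask>"

def make_ssr_example(text, salient_spans):
    """Salient span recovery: mask each salient span in the input;
    the target is the original, unmasked text."""
    corrupted = text
    for span in salient_spans:
        corrupted = corrupted.replace(span, MASK)
    return corrupted, text

def make_ssp_example(text, salient_spans):
    """Salient span prediction: same corrupted input, but the target
    is only the masked spans, joined by a separator."""
    corrupted, _ = make_ssr_example(text, salient_spans)
    return corrupted, " ; ".join(salient_spans)

doc = "graph neural networks improve link prediction on citation graphs"
spans = ["graph neural networks", "link prediction"]

ssr_input, ssr_target = make_ssr_example(doc, spans)
ssp_input, ssp_target = make_ssp_example(doc, spans)
print(ssr_input)   # "<mask> improve <mask> on citation graphs"
print(ssp_target)  # "graph neural networks ; link prediction"
```

A sequence-to-sequence model pretrained on such pairs sees the keyphrase-shaped spans as the prediction target, which is one plausible reading of why these objectives would transfer well to keyphrase generation, including absent keyphrases.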


