LightNER: A Lightweight Generative Framework with Prompt-guided Attention for Low-resource NER

08/31/2021
by Xiang Chen et al.

Most existing NER methods rely on extensive labeled data for model training and therefore struggle in low-resource scenarios with limited training data. Recently, prompt-tuning methods for pre-trained language models have achieved remarkable performance in few-shot learning by exploiting prompts as task guidance to reduce the gap between pre-training and downstream tuning. Inspired by prompt learning, we propose LightNER, a lightweight generative framework with prompt-guided attention for low-resource NER. Specifically, we construct a semantic-aware answer space of entity categories for prompt learning, so that the model generates the entity span sequence and entity categories without any label-specific classifiers. We further propose prompt-guided attention, which incorporates continuous prompts into the self-attention layers to re-modulate the attention and adapt the pre-trained weights. Note that we tune only these continuous prompts while keeping all parameters of the pre-trained language model fixed, making our approach lightweight and flexible for low-resource scenarios and better able to transfer knowledge across domains. Experimental results show that LightNER achieves comparable performance in the standard supervised setting and outperforms strong baselines in low-resource settings by tuning only a small fraction of the parameters.
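As a rough illustration of the prompt-guided attention idea, here is a minimal PyTorch sketch (our own reading of the abstract, not the authors' released code): trainable continuous prompts are projected to extra key and value vectors and prepended inside a frozen self-attention layer, so the prompts re-modulate the attention distribution while the pre-trained projections stay fixed. The class and parameter names (PromptGuidedAttention, prompt_len, and so on) are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PromptGuidedAttention(nn.Module):
        """Self-attention whose keys/values are extended with trainable
        continuous prompts; the pre-trained projections stay frozen."""

        def __init__(self, hidden: int, n_heads: int, prompt_len: int = 10):
            super().__init__()
            self.n_heads, self.d_head = n_heads, hidden // n_heads
            # Frozen pre-trained projections (loaded from the PLM in practice).
            self.q_proj = nn.Linear(hidden, hidden)
            self.k_proj = nn.Linear(hidden, hidden)
            self.v_proj = nn.Linear(hidden, hidden)
            for proj in (self.q_proj, self.k_proj, self.v_proj):
                proj.weight.requires_grad_(False)
                proj.bias.requires_grad_(False)
            # The only trainable parameters: continuous key/value prompts.
            self.prompt_k = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
            self.prompt_v = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

        def forward(self, x):  # x: (batch, seq, hidden)
            b, t, h = x.shape
            q = self.q_proj(x)
            # Prepend prompts to keys/values so they re-modulate attention.
            k = torch.cat([self.prompt_k.expand(b, -1, -1), self.k_proj(x)], dim=1)
            v = torch.cat([self.prompt_v.expand(b, -1, -1), self.v_proj(x)], dim=1)

            def split(z):  # (batch, len, hidden) -> (batch, heads, len, d_head)
                return z.view(b, -1, self.n_heads, self.d_head).transpose(1, 2)

            q, k, v = split(q), split(k), split(v)
            scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
            out = F.softmax(scores, dim=-1) @ v
            return out.transpose(1, 2).reshape(b, t, h)

Under this reading, only prompt_k and prompt_v receive gradients during training, which is what makes the approach lightweight: the same frozen backbone can be reused across domains with a different, small set of prompts.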


Related research

12/01/2021
NER-BERT: A Pre-trained Model for Low-Resource Entity Tagging
Named entity recognition (NER) models generally perform poorly when larg...

04/11/2022
A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition
Pre-trained language models (PLM) are effective components of few-shot n...

10/22/2022
Generative Prompt Tuning for Relation Classification
Using prompts to explore the knowledge contained within pre-trained lang...

03/02/2023
MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering
Recently, finetuning pretrained vision-language models (VLMs) has become...

08/06/2023
PromptSum: Parameter-Efficient Controllable Abstractive Summarization
Prompt tuning (PT), a parameter-efficient technique that only tunes the ...

08/28/2018
Deriving Machine Attention from Human Rationales
Attention-based models are successful when trained on large amounts of d...
