PTR: Prompt Tuning with Rules for Text Classification

05/24/2021
by Xu Han, et al.

Fine-tuned pre-trained language models (PLMs) have achieved strong performance on almost all NLP tasks. By using additional prompts to fine-tune PLMs, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks. Prompt tuning has achieved promising results on some few-class classification tasks such as sentiment classification and natural language inference. However, manually designing many language prompts is cumbersome and error-prone, and for auto-generated prompts, verifying their effectiveness in non-few-shot scenarios is expensive and time-consuming. Hence, it is challenging for prompt tuning to address many-class classification tasks. To this end, we propose prompt tuning with rules (PTR) for many-class text classification, applying logic rules to construct prompts from several sub-prompts. In this way, PTR encodes the prior knowledge of each class into prompt tuning. We conduct experiments on relation classification, a typical many-class classification task, and the results on benchmarks show that PTR significantly and consistently outperforms existing state-of-the-art baselines. This indicates that PTR is a promising approach for taking advantage of PLMs on such complicated classification tasks.
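To make the sub-prompt idea concrete, the sketch below shows, in the style the abstract describes, how a logic rule over entity types and a relation can be rendered as a composed prompt with multiple [MASK] slots, one per sub-prompt. This is a minimal illustration, not the paper's implementation: the function names, templates, and label words here are hypothetical, and PTR's actual templates and training code are those given in the paper.

```python
# Hypothetical sketch of PTR-style prompt composition for relation
# classification. Templates and label words are illustrative only.

MASK = "[MASK]"

def entity_subprompt(entity: str) -> str:
    # Sub-prompt asking the PLM to type one entity mention,
    # e.g. "the [MASK] Mark Twain".
    return f"the {MASK} {entity}"

def relation_subprompt() -> str:
    # Sub-prompt asking the PLM to verbalize the relation
    # holding between the two typed entities.
    return MASK

def compose_prompt(sentence: str, head: str, tail: str) -> str:
    # A logic rule such as
    #   person(x) AND parent_of(x, y) AND person(y) -> per:parent(x, y)
    # is rendered as the conjunction of its sub-prompts.
    template = (
        f"{entity_subprompt(head)} {relation_subprompt()} "
        f"{entity_subprompt(tail)}."
    )
    return f"{sentence} {template}"

# Each class corresponds to a tuple of label words, one per [MASK]
# slot; these tuples are invented for illustration.
LABEL_WORDS = {
    "per:parent":     ("person", "'s parent was", "person"),
    "org:founded_by": ("organization", "was founded by", "person"),
    "no_relation":    ("entity", "is irrelevant to", "entity"),
}

if __name__ == "__main__":
    prompt = compose_prompt(
        "Mark Twain married Olivia Langdon in 1870.",
        head="Mark Twain",
        tail="Olivia Langdon",
    )
    print(prompt)
    # -> "... the [MASK] Mark Twain [MASK] the [MASK] Olivia Langdon."
    # A masked language model fills each [MASK]; the predicted class is
    # the one whose label-word tuple has the highest joint probability
    # across the slots.
```

Under this scheme, prior knowledge about a class (the types its arguments must have, and how the relation is verbalized) is encoded directly in the composed prompt, rather than in a single monolithic template per class.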


Related research

05/11/2022 · Towards Unified Prompt Tuning for Few-shot Text Classification
Prompt-based fine-tuning has boosted the performance of Pre-trained Lang...

09/20/2022 · A Few-shot Approach to Resume Information Extraction via Prompts
Prompt learning has been shown to achieve near-Fine-tune performance in ...

11/30/2022 · Learning Label Modular Prompts for Text Classification in the Wild
Machine learning models usually assume i.i.d data during training and te...

07/15/2021 · Uncertainty-Aware Reliable Text Classification
Deep neural networks have significantly contributed to the success in pr...

03/18/2022 · Prototypical Verbalizer for Prompt-based Few-shot Tuning
Prompt-based tuning for pre-trained language models (PLMs) has shown its...

09/08/2019 · Transfer Learning Robustness in Multi-Class Categorization by Fine-Tuning Pre-Trained Contextualized Language Models
This study compares the effectiveness and robustness of multi-class cate...

10/22/2022 · Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks
Large pretrained Transformer-based language models like BERT and GPT hav...
