Few-Shot Natural Language Inference Generation with PDD: Prompt and Dynamic Demonstration

05/21/2022
by   Kaijian Li, et al.

The Natural Language Inference (NLI) generation task is to generate a text hypothesis given a text premise and a logical relation between the two. In practice, this task can be used for data augmentation and controllable text generation. In this paper, we propose language models with prompt and dynamic demonstration (LM-PDD) to tackle this problem in few-shot settings. Our framework outperforms standard fine-tuned models in low-resource conditions, achieving an average 8% improvement on the SNLI and MNLI datasets, and results on 13 natural language classification tasks also show that our dynamic demonstration method generalizes well.
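The full paper is behind the link below, but the general recipe the abstract names (prompting a language model with dynamically chosen task demonstrations) can be sketched roughly as follows. The prompt template, the word-overlap selection heuristic, and all function names here are illustrative assumptions for exposition, not the authors' actual LM-PDD design:

```python
# Hedged sketch of few-shot prompting with dynamically selected
# demonstrations for NLI generation. The template and the similarity
# heuristic are assumptions, not the paper's actual method.

def select_demonstrations(premise, pool, k=2):
    """Pick the k pool examples sharing the most words with the query
    premise (a crude stand-in for a learned demonstration selector)."""
    words = set(premise.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: len(words & set(ex["premise"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(premise, relation, pool):
    """Assemble a few-shot prompt: demonstrations first, then the query,
    leaving the hypothesis slot open for the language model to fill."""
    lines = []
    for ex in select_demonstrations(premise, pool):
        lines.append(f"Premise: {ex['premise']}")
        lines.append(f"Relation: {ex['relation']}")
        lines.append(f"Hypothesis: {ex['hypothesis']}")
        lines.append("")
    lines.append(f"Premise: {premise}")
    lines.append(f"Relation: {relation}")
    lines.append("Hypothesis:")
    return "\n".join(lines)

# Tiny demonstration pool; in the paper's setting these would be the
# few-shot labeled NLI examples.
pool = [
    {"premise": "A man is playing a guitar on stage.",
     "relation": "entailment",
     "hypothesis": "A man is performing music."},
    {"premise": "Two dogs run across a field.",
     "relation": "contradiction",
     "hypothesis": "The animals are sleeping."},
]

prompt = build_prompt("A woman is playing a violin.", "entailment", pool)
print(prompt)
```

The open-ended `Hypothesis:` slot at the end is what the language model completes; "dynamic" here means the demonstrations change per query rather than being fixed across the dataset.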

READ FULL TEXT


