Context-Tuning: Learning Contextualized Prompts for Natural Language Generation

01/21/2022
by   Tianyi Tang, et al.

Recently, pretrained language models (PLMs) have achieved exceptional success in language generation. To leverage the rich knowledge encoded by PLMs, a simple yet powerful mechanism is to use prompts, in the form of either discrete tokens or continuous embeddings. In existing studies, manual prompts are time-consuming to design and require domain expertise, while continuous prompts are typically independent of the inputs. To address this issue, we propose a novel continuous prompting approach, called Context-Tuning, to fine-tune PLMs for natural language generation. First, the prompts are derived from the input text, so that they can elicit useful knowledge from PLMs for generation. We refer to such prompts as contextualized prompts. Second, to further enhance the relevance of the generated text to the inputs, we utilize continuous inverse prompting to refine the generation process by modeling an inverse generation process from output to input. Moreover, we propose a lightweight context-tuning variant that fine-tunes only 0.4% of the parameters while retaining competitive performance.
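The core idea of contextualized prompts can be sketched as follows: instead of learning a fixed set of prompt embeddings, a small trainable module maps a summary of the input into continuous prompt vectors that are prepended to the input embeddings before they reach the frozen PLM. The sketch below is a minimal illustration of this data flow, not the paper's actual architecture; the pooling strategy, dimensions, and the linear prompt generator (`W`) are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 16   # hypothetical embedding size
N_PROMPTS = 4    # hypothetical number of contextualized prompt vectors

# Stand-in for frozen PLM embeddings of a 6-token input sequence.
input_embeds = rng.normal(size=(6, EMBED_DIM))

# Trainable prompt generator (assumed linear here): maps a pooled
# input representation to N_PROMPTS continuous prompt vectors.
W = rng.normal(size=(EMBED_DIM, N_PROMPTS * EMBED_DIM)) * 0.02

def contextualized_prompts(embeds):
    """Derive input-dependent prompt embeddings from the input text."""
    pooled = embeds.mean(axis=0)                      # summarize the input
    return (pooled @ W).reshape(N_PROMPTS, EMBED_DIM)

# Prepend the input-dependent prompts to the input embeddings; the
# concatenated sequence is what the frozen PLM would then consume.
prompts = contextualized_prompts(input_embeds)
model_input = np.concatenate([prompts, input_embeds], axis=0)
print(model_input.shape)  # (10, 16)
```

Because the prompts depend on the input (unlike a fixed prefix), different inputs elicit different prompt vectors; in the lightweight setting only the prompt generator would be trained while the PLM stays frozen.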


Related research

Prefix-Tuning: Optimizing Continuous Prompts for Generation (01/01/2021)
Fine-tuning is the de facto way to leverage large pretrained language mo...

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models (03/07/2022)
Recently the prompt-tuning paradigm has attracted significant attention....

KnowPrefix-Tuning: A Two-Stage Prefix-Tuning Framework for Knowledge-Grounded Dialogue Generation (06/27/2023)
Existing knowledge-grounded conversation systems generate responses typi...

Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective (06/11/2023)
Molecule discovery plays a crucial role in various scientific fields, ad...

Latent Predictor Networks for Code Generation (03/22/2016)
Many language generation tasks require the production of text conditione...

Naturalness Evaluation of Natural Language Generation in Task-oriented Dialogues using BERT (09/07/2021)
This paper presents an automatic method to evaluate the naturalness of n...

Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction (06/06/2023)
Many works employed prompt tuning methods to automatically optimize prom...
