AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts

10/29/2020
by Taylor Shin et al.

The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blanks problems (e.g., cloze tests) is a natural approach for gauging such knowledge; however, its use is limited by the manual effort and guesswork required to write suitable prompts. To address this, we develop AutoPrompt, an automated method to create prompts for a diverse set of tasks, based on a gradient-guided search. Using AutoPrompt, we show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning, sometimes achieving performance on par with recent state-of-the-art supervised models. We also show that our prompts elicit more accurate factual knowledge from MLMs than the manually created prompts on the LAMA benchmark, and that MLMs can be used as relation extractors more effectively than supervised relation extraction models. These results demonstrate that automatically generated prompts are a viable parameter-free alternative to existing probing methods, and as pretrained LMs become more sophisticated and capable, potentially a replacement for finetuning.
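To make the gradient-guided search concrete, here is a minimal single-example sketch using the HuggingFace transformers library and a HotFlip-style first-order approximation. The template, the label word "terrible", the greedy argmax update, and the tiny step count are illustrative assumptions, not the authors' code; the paper batches the gradient over training data, filters candidate tokens, and re-evaluates the top-k candidates on held-out examples before accepting a swap.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()
embedding_matrix = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)

# Cloze-style template: "<input> [T] [T] [T] [MASK] ." where the [T] slots
# hold trigger tokens and the final [MASK] is where the label word is read off.
sentence = "The movie was a complete waste of time."
label_word = "terrible"   # hypothetical label token for the "negative" class
num_triggers = 3

text = f"{sentence} {' '.join([tokenizer.mask_token] * (num_triggers + 1))} ."
input_ids = tokenizer(text, return_tensors="pt").input_ids
mask_slots = (input_ids[0] == tokenizer.mask_token_id).nonzero().squeeze(1)
trigger_slots, predict_slot = mask_slots[:-1], mask_slots[-1]
label_id = torch.tensor([tokenizer.convert_tokens_to_ids(label_word)])

for step in range(3 * num_triggers):   # a few passes of coordinate ascent
    slot = trigger_slots[step % num_triggers]
    # Run the embedding layer separately so we can take d(loss)/d(embeddings).
    embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds).logits
    loss = torch.nn.functional.cross_entropy(
        logits[0, predict_slot].unsqueeze(0), label_id)
    loss.backward()
    # HotFlip-style first-order score for swapping token w into this slot:
    # -grad . e_w approximates the resulting decrease in the loss.
    # (The paper additionally filters special tokens and re-checks the
    # top-k candidates on held-out data instead of taking a greedy argmax.)
    grad = embeds.grad[0, slot]                   # (hidden_dim,)
    scores = embedding_matrix.detach() @ (-grad)  # (vocab_size,)
    input_ids[0, slot] = int(scores.argmax())     # greedy swap

print(tokenizer.decode(input_ids[0]))  # sentence followed by learned triggers
```

The same cloze template doubles as the probing interface: at test time the model fills the final [MASK], and the probability assigned to each class's label word is read off the MLM head, with no extra parameters trained.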


10/14/2020

Unsupervised Relation Extraction from Language Models using Constrained Cloze Completion

We show that state-of-the-art self-supervised language models can be rea...
07/31/2022

Improving Distantly Supervised Relation Extraction by Natural Language Inference

To reduce human annotations for relation extraction (RE) tasks, distantl...
11/08/2019

Negated LAMA: Birds cannot fly

Pretrained language models have achieved remarkable improvements in a br...
10/23/2020

BARThez: a Skilled Pretrained French Sequence-to-Sequence Model

Inductive transfer learning, enabled by self-supervised learning, has t...
02/02/2022

Understanding Knowledge Integration in Language Models with Graph Convolutions

Pretrained language models (LMs) do not capture factual knowledge very w...
02/10/2021

Language Models for Lexical Inference in Context

Lexical inference in context (LIiC) is the task of recognizing textual e...
11/28/2019

How Can We Know What Language Models Know?

Recent work has presented intriguing results examining the knowledge con...