
Enabling Language Models to Fill in the Blanks

by Chris Donahue, et al.

We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document. While infilling could enable rich functionality, especially for writing assistance tools, more attention has been devoted to language modeling—a special case of infilling where text is predicted at the end of a document. In this paper, we aim to extend the capabilities of language models (LMs) to the more general task of infilling. To this end, we train (or fine-tune) off-the-shelf LMs on sequences containing the concatenation of artificially-masked text and the text that was masked. We show that this approach, which we call infilling by language modeling, enables LMs to infill entire sentences effectively in three different domains: short stories, scientific abstracts, and lyrics. Furthermore, we show that humans have difficulty identifying sentences infilled by our approach as machine-generated in the domain of short stories.
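The training data format the abstract describes—masked text concatenated with the text that was masked—can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: the token names (`[blank]`, `[sep]`, `[answer]`), the sentence-level masking granularity, and the helper function are illustrative assumptions.

```python
import random

# Illustrative special tokens; the actual token strings are an assumption.
BLANK, SEP, ANS = "[blank]", "[sep]", "[answer]"

def make_ilm_example(sentences, mask_prob=0.5, rng=random):
    """Build one training sequence: the document with some sentences
    replaced by a blank token, a separator, then each masked-out
    sentence followed by an answer token."""
    masked, answers = [], []
    for s in sentences:
        if rng.random() < mask_prob:
            masked.append(BLANK)             # hide this span in the input
            answers.append(s + " " + ANS)    # reveal it after the separator
        else:
            masked.append(s)
    return " ".join(masked + [SEP] + answers)

# Example (mask_prob=1.0 masks every sentence, for determinism):
# make_ilm_example(["She ate lunch.", "Then she left."], mask_prob=1.0)
# → "[blank] [blank] [sep] She ate lunch. [answer] Then she left. [answer]"
```

An off-the-shelf LM fine-tuned on such sequences learns, at inference time, to continue a masked document past the separator with plausible contents for each blank—reducing infilling to ordinary left-to-right language modeling.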

