
Go Forth and Prosper: Language Modeling with Ancient Textual History

04/18/2021
by Rik Koncel-Kedziorski et al.

We introduce a technique for improving document-level language models (LMs) by leveraging "ancient history": text that lies outside the LM's current context window. We learn an auxiliary function to select spans from the ancient history that help the LM predict future text. The selected spans are then copied directly into the LM's context window, replacing less predictive spans. This method can improve the perplexity of pretrained LMs with no updates to the LM's own parameters. We further observe that an auxiliary function trained in one textual domain, such as Wikipedia, also works in a substantially different domain such as scientific publications. With this technique we see a 7 percent perplexity reduction on Wikipedia articles and a 12 percent perplexity reduction on scientific texts.
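The core move described in the abstract is easy to picture in code. Below is a minimal sketch under assumptions not spelled out in the abstract: fixed-width token spans, a scorer with signature `aux_score(span, recent)` standing in for the learned auxiliary function, and a toy lexical-overlap scorer as a placeholder. It shows only the mechanism: rank ancient spans, rank recent spans, and copy the best ancient spans over the least predictive recent ones before the frozen LM reads the window.

```python
from typing import Callable, List, Tuple

Span = Tuple[int, int]  # [start, end) token offsets


def chunk_spans(n_tokens: int, width: int) -> List[Span]:
    """Split a token sequence into contiguous fixed-width spans."""
    return [(i, min(i + width, n_tokens)) for i in range(0, n_tokens, width)]


def build_context(
    ancient: List[str],   # tokens that fell outside the LM's window
    recent: List[str],    # tokens currently inside the window
    aux_score: Callable[[List[str], List[str]], float],  # assumed helper
    window: int,          # LM context size in tokens
    span_width: int = 32,
    k: int = 2,
) -> List[str]:
    """Copy the k most promising ancient spans into the context window,
    evicting the k least predictive recent spans. The LM itself is never
    updated; only the contents of its window change."""
    # Rank ancient spans by how much the scorer thinks they will help.
    ancient_spans = chunk_spans(len(ancient), span_width)
    best = sorted(ancient_spans,
                  key=lambda s: aux_score(ancient[s[0]:s[1]], recent),
                  reverse=True)[:k]

    # Rank recent spans the same way; mark the worst k for replacement.
    recent_spans = chunk_spans(len(recent), span_width)
    worst = set(sorted(recent_spans,
                       key=lambda s: aux_score(recent[s[0]:s[1]], recent))[:k])

    # Splice: selected ancient spans replace the least predictive recent ones.
    out: List[str] = []
    replacements = iter(best)
    for span in recent_spans:
        if span in worst:
            r = next(replacements, None)
            if r is not None:
                out.extend(ancient[r[0]:r[1]])
        else:
            out.extend(recent[span[0]:span[1]])
    return out[-window:]  # keep at most `window` tokens


if __name__ == "__main__":
    ancient = "the treaty of 1848 ended the war and redrew the border".split()
    recent = "the border dispute resumed decades later when the".split()
    # Toy stand-in for the learned auxiliary function: lexical overlap.
    toy_score = lambda span, ctx: len(set(span) & set(ctx))
    print(build_context(ancient, recent, toy_score,
                        window=16, span_width=4, k=1))
```

Since only the window contents change, the pretrained LM's parameters stay frozen, matching the abstract's claim of no parameter updates. In the paper the selection function is learned; the overlap scorer above is purely illustrative.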

