The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations

11/07/2015
by Felix Hill, et al.

We introduce a new test of how well language models capture meaning in children's books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation strongly influences performance: there is a sweet spot, not too big and not too small, between single words and full sentences, that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.
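To make the window-memory idea concrete, the following is a minimal Python sketch (not the authors' code) of attention over window-based memories: each memory slot holds a fixed-size window of words centred on a candidate answer, and attention between the query and those windows selects the candidate. The vocabulary, embedding dimension, window width, and dot-product-plus-softmax scoring are illustrative assumptions rather than the paper's exact configuration; in the paper the embeddings and attention are learned (including via self-supervision), whereas here the embeddings are random, so the output is only illustrative.

```python
# Hedged sketch of window-based memory attention for cloze-style prediction.
# All names and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "fox", "jumped", "over", "dog", "and", "ran", "away", "xxxxx"]
EMB_DIM = 16
# Word embeddings: random here; in practice these would be learned.
E = {w: rng.normal(size=EMB_DIM) for w in VOCAB}

def encode(words):
    """Bag-of-words encoding of a word window (sum of embeddings)."""
    return np.sum([E[w] for w in words], axis=0)

def window_memories(context, candidates, width=2):
    """One memory per occurrence of a candidate word: the window of
    `width` words on either side of that occurrence."""
    memories = []
    for i, w in enumerate(context):
        if w in candidates:
            lo, hi = max(0, i - width), min(len(context), i + width + 1)
            memories.append((w, context[lo:hi]))
    return memories

def answer(context, query, candidates, width=2):
    """Attend over window memories with the query and return the
    candidate whose windows receive the most attention mass."""
    q = encode(query)
    memories = window_memories(context, candidates, width)
    scores = np.array([q @ encode(win) for _, win in memories])
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    votes = {}
    for (cand, _), a in zip(memories, attn):
        votes[cand] = votes.get(cand, 0.0) + a
    return max(votes, key=votes.get)

# Toy cloze query: "xxxxx" is a placeholder token for the missing word.
context = "the fox jumped over the dog and the dog ran away".split()
candidates = {"fox", "dog"}
print(answer(context, ["the", "xxxxx", "ran", "away"], candidates))
```

The design choice the sketch illustrates is the one the abstract highlights: memories built from word windows (rather than single words or whole sentences) are what the attention mechanism scores and aggregates.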


research
09/30/2021

Syntactic Persistence in Language Models: Priming as a Window into Abstract Language Representations

We investigate the extent to which modern, neural language models are su...
research
02/02/2022

Language Models Explain Word Reading Times Better Than Empirical Predictability

Though there is a strong consensus that word length and frequency are th...
research
10/16/2018

Named Entity Analysis and Extraction with Uncommon Words

Most previous research treats named entity extraction and classification...
research
09/10/2019

Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study

Neural language models have achieved state-of-the-art performances on ma...
research
05/10/2021

Language Acquisition is Embodied, Interactive, Emotive: a Research Proposal

Humans' experience of the world is profoundly multimodal from the beginn...
research
09/26/2019

Pre-train, Interact, Fine-tune: A Novel Interaction Representation for Text Classification

Text representation can aid machines in understanding text. Previous wor...
research
04/10/2017

Pay Attention to Those Sets! Learning Quantification from Images

Major advances have recently been made in merging language and vision re...
