Improving Neural Language Models by Segmenting, Attending, and Predicting the Future

06/04/2019 · by Hongyin Luo, et al.

Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context with the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised manner. Our method can easily be applied to language models with different network architectures, since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments show that our model outperforms several strong baseline models on different datasets. We achieve a new state-of-the-art perplexity of 17.4 on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module shows that our model is able to learn approximate phrase-level structural knowledge without any annotation.
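The abstract describes an independent module that predicts per-token syntactic heights, induces phrase segments from them, and attends over each phrase to build a phrase embedding aligned with the context. The PyTorch sketch below is only an illustration of that general idea under stated assumptions; the class name, the boundary heuristic (a token is treated as a likely boundary when its height exceeds the next token's), and all parameters are hypothetical and are not the authors' implementation.

```python
# Illustrative sketch only: a minimal phrase-induction module that could sit on
# top of any language model's hidden states. Names and heuristics are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PhraseInductionSketch(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Scores one "syntactic height" per token from the LM hidden state.
        self.height_scorer = nn.Linear(hidden_dim, 1)
        # Projection used to attend over tokens when pooling a phrase embedding.
        self.phrase_query = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_dim) from any underlying LM.
        heights = self.height_scorer(hidden_states).squeeze(-1)  # (batch, seq_len)

        # Soft segmentation signal (assumed heuristic): a boundary is likely
        # where a token's height exceeds that of the following token.
        boundary_logits = heights[:, :-1] - heights[:, 1:]
        boundary_probs = torch.sigmoid(boundary_logits)  # (batch, seq_len - 1)

        # Attention-pooled phrase embedding over the context, biased toward
        # tokens near predicted boundaries (standing in for phrase heads).
        queries = self.phrase_query(hidden_states[:, :-1])
        attn = F.softmax(
            (queries * hidden_states[:, :-1]).sum(-1) + boundary_logits, dim=-1
        )
        phrase_embedding = torch.einsum("bs,bsd->bd", attn, hidden_states[:, :-1])
        return boundary_probs, phrase_embedding


if __name__ == "__main__":
    # Toy usage with random states; shapes: boundaries (2, 9), phrases (2, 64).
    states = torch.randn(2, 10, 64)
    module = PhraseInductionSketch(hidden_dim=64)
    boundaries, phrases = module(states)
    print(boundaries.shape, phrases.shape)
```

Because the module only consumes hidden states and returns an extra phrase embedding, a design like this can be attached to different language modeling architectures without modifying the underlying network, which is the property the abstract emphasizes.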


