PaLM: A Hybrid Parser and Language Model

09/04/2019
by Hao Peng, et al.

We present PaLM, a hybrid parser and neural language model. Building on an RNN language model, PaLM adds an attention layer over text spans in the left context. An unsupervised constituency parser can be derived from its attention weights, using a greedy decoding algorithm. We evaluate PaLM on language modeling, and empirically show that it outperforms strong baselines. If syntactic annotations are available, the attention component can be trained in a supervised manner, providing syntactically informed representations of the context, and further improving language modeling performance.
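To make the described architecture concrete, here is a minimal sketch, not the authors' implementation, of an LSTM language model whose next-word prediction also attends over spans ending at the current position in the left context. All names (SpanAttentionLM, span_proj, and so on) and the particular span encoding are illustrative assumptions; in the model the per-step attention weights over spans are the quantities a greedy decoder could turn into an unlabeled constituency tree.

```python
# Sketch of an RNN LM with attention over left-context spans (assumed design,
# not the published PaLM code). Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpanAttentionLM(nn.Module):
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.query = nn.Linear(dim, dim)          # attention query from the current RNN state
        self.span_proj = nn.Linear(2 * dim, dim)  # encode a span [i, t] from its endpoint states
        self.out = nn.Linear(2 * dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, time) integer ids
        h, _ = self.rnn(self.embed(tokens))            # (batch, time, dim)
        batch, time, dim = h.shape
        logits, all_attn = [], []
        for t in range(time):
            # candidate spans are [i, t] for all i <= t, represented by their endpoints
            starts = h[:, : t + 1, :]                              # (batch, t+1, dim)
            ends = h[:, t, :].unsqueeze(1).expand_as(starts)
            spans = torch.tanh(self.span_proj(torch.cat([starts, ends], dim=-1)))
            q = self.query(h[:, t, :]).unsqueeze(2)                # (batch, dim, 1)
            attn = F.softmax(spans @ q, dim=1)                     # (batch, t+1, 1)
            ctx = (attn * spans).sum(dim=1)                        # attended span summary
            logits.append(self.out(torch.cat([h[:, t, :], ctx], dim=-1)))
            all_attn.append(attn.squeeze(-1))
        # all_attn holds the per-step weights over left-context spans; a greedy
        # decoder over these weights is how a parse could be read off.
        return torch.stack(logits, dim=1), all_attn


if __name__ == "__main__":
    lm = SpanAttentionLM(vocab_size=1000)
    toks = torch.randint(0, 1000, (2, 6))
    logits, attn = lm(toks)
    print(logits.shape)    # torch.Size([2, 6, 1000])
    print(attn[-1].shape)  # torch.Size([2, 6]): weights over spans ending at the final step
```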


Related research

- Meta-Learning a Dynamical Language Model (03/28/2018): We consider the task of word-level language modeling and study the possi...
- Neural Language Modeling with Visual Features (03/07/2019): Multimodal language models attempt to incorporate non-linguistic feature...
- Neural Language Modeling by Jointly Learning Syntax and Lexicon (11/02/2017): We propose a neural language model capable of unsupervised syntactic str...
- Unsupervised Recurrent Neural Network Grammars (04/07/2019): Recurrent neural network grammars (RNNG) are generative models of langua...
- Sort by Structure: Language Model Ranking as Dependency Probing (06/10/2022): Making an informed choice of pre-trained language model (LM) is critical...
- Advancing Connectionist Temporal Classification With Attention Modeling (03/15/2018): In this study, we propose advancing all-neural speech recognition by dir...
- Single Headed Attention RNN: Stop Thinking With Your Head (11/26/2019): The leading approaches in language modeling are all obsessed with TV sho...
