Recurrent Neural Network Grammars

02/25/2016
by Chris Dyer, et al.

We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.
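
The abstract leaves the mechanics implicit: in the paper, an RNNG defines a joint distribution over a sentence and its phrase-structure tree by generating both top-down with three action types, NT(X) to open a labelled constituent, GEN(w) to emit a word, and REDUCE to close the most recent open constituent, so the joint probability factorizes as a product of per-action probabilities conditioned on the action history. The sketch below is a minimal illustration of that linearization only; the tree encoding and function names are assumptions made here for illustration, not the authors' implementation, which additionally scores each action with stack, buffer, and history LSTMs.

```python
# Minimal sketch (not the authors' code): linearize a phrase-structure tree
# into the top-down action sequence an RNNG generator would produce.
# The joint probability then factorizes as
#     p(x, y) = prod_t p(a_t | a_1, ..., a_{t-1}),
# where each a_t is NT(X), GEN(w), or REDUCE.
# The tuple-based tree format below is an assumption for this example.

from typing import List, Tuple, Union

# A tree is (label, [children]); a leaf is just the word string.
Tree = Union[str, Tuple[str, list]]


def tree_to_actions(tree: Tree) -> List[str]:
    """Convert a constituency tree into the RNNG generator's action sequence."""
    if isinstance(tree, str):            # terminal: generate the word
        return [f"GEN({tree})"]
    label, children = tree
    actions = [f"NT({label})"]           # open a labelled nonterminal
    for child in children:
        actions.extend(tree_to_actions(child))
    actions.append("REDUCE")             # close the open constituent
    return actions


if __name__ == "__main__":
    # (S (NP the hungry cat) (VP meows))
    tree = ("S", [("NP", ["the", "hungry", "cat"]), ("VP", ["meows"])])
    for t, action in enumerate(tree_to_actions(tree)):
        print(t, action)
```

Given such a sequence, the model assigns each action a probability with a softmax over the valid next actions, computed from the recurrent state summarizing the derivation so far, and the product of these probabilities is the joint probability of the sentence and its tree.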

Related research

04/26/2019 · Think Again Networks, the Delta Loss, and an Application in Language Modeling
This short paper introduces an abstraction called Think Again Networks (...

08/01/2017 · A Generative Parser with a Discriminative Recognition Algorithm
Generative models defining joint distributions over parse trees and sent...

04/07/2019 · Unsupervised Recurrent Neural Network Grammars
Recurrent neural network grammars (RNNG) are generative models of langua...

11/17/2016 · What Do Recurrent Neural Network Grammars Learn About Syntax?
Recurrent neural network grammars (RNNG) are a recently proposed probabi...

03/02/2020 · Tensor Networks for Language Modeling
The tensor network formalism has enjoyed over two decades of success in ...

08/27/2016 · Multi-Path Feedback Recurrent Neural Network for Scene Parsing
In this paper, we consider the scene parsing problem and propose a novel...

02/27/2019 · Alternating Synthetic and Real Gradients for Neural Language Modeling
Training recurrent neural networks (RNNs) with backpropagation through t...
