Training with Exploration Improves a Greedy Stack-LSTM Parser

03/11/2016
by Miguel Ballesteros, et al.

We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of assuming error-free gold action histories under cross-entropy minimization. This form of training, which accounts for the model's own predictions at training time rather than assuming an error-free action history, improves parsing accuracy for both English and Chinese, obtaining very strong results for both languages. We discuss the modifications needed to make training with exploration work well for a probabilistic neural network.
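The core idea can be sketched in a few lines: at each transition, a dynamic oracle labels the best action from the *current* (possibly erroneous) state, the model is updated toward that action, and then, with some probability, the parser follows its own greedy prediction rather than the oracle, exposing training to its own mistakes. Below is a minimal, hedged sketch on a toy stand-in for the transition system; all names (`ToyModel`, `dynamic_oracle`, `p_explore`) are illustrative, not the paper's actual implementation.

```python
import random

ACTIONS = ["SHIFT", "LEFT", "RIGHT"]

class ToyModel:
    """A trivial scorer: per-position action weights, nudged toward the
    oracle action (a perceptron-style stand-in for the neural update)."""
    def __init__(self):
        self.w = {}  # (position, action) -> weight

    def scores(self, pos):
        return {a: self.w.get((pos, a), 0.0) for a in ACTIONS}

    def update(self, pos, oracle_action):
        # Nudge the oracle action up and the alternatives down.
        for a in ACTIONS:
            delta = 1.0 if a == oracle_action else -0.5
            self.w[(pos, a)] = self.w.get((pos, a), 0.0) + delta

def dynamic_oracle(pos, gold):
    """Best action from the current state; here a fixed gold sequence
    plays the role of the set of still-reachable gold trees."""
    return gold[pos]

def train_with_exploration(gold, model, p_explore=0.9, epochs=20, seed=0):
    rng = random.Random(seed)
    for _ in range(epochs):
        for pos in range(len(gold)):
            oracle_action = dynamic_oracle(pos, gold)
            model.update(pos, oracle_action)
            s = model.scores(pos)
            if rng.random() < p_explore:
                action = max(s, key=s.get)   # follow the model's own greedy choice
            else:
                action = oracle_action       # follow the oracle
            # a real parser would apply `action` to its stack/buffer state here

gold = ["SHIFT", "SHIFT", "LEFT", "RIGHT"]
model = ToyModel()
train_with_exploration(gold, model)
predicted = [max(model.scores(i), key=model.scores(i).get)
             for i in range(len(gold))]
```

With a high `p_explore`, most transitions during training are chosen by the model itself, so later updates are conditioned on states the parser will actually reach at test time; this is what distinguishes the procedure from static-oracle training on gold derivations only.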


Related research

AMR Parsing using Stack-LSTMs (07/24/2017)
We present a transition-based AMR parser that directly generates AMR par...

Left-to-Right Dependency Parsing with Pointer Networks (03/20/2019)
We propose a novel transition-based algorithm that straightforwardly par...

Transition-Based Dependency Parsing with Stack Long Short-Term Memory (05/29/2015)
We propose a technique for learning representations of parser states in ...

Easy-First Dependency Parsing with Hierarchical Tree LSTMs (03/01/2016)
We suggest a compositional vector representation of parse trees that rel...

Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing (02/22/2017)
Error propagation is a common problem in NLP. Reinforcement learning exp...

Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning (05/31/2019)
Our work involves enriching the Stack-LSTM transition-based AMR parser (...

A non-projective greedy dependency parser with bidirectional LSTMs (07/11/2017)
The LyS-FASTPARSE team presents BIST-COVINGTON, a neural implementation ...
