Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning

05/31/2019
by Tahira Naseem, et al.

We enrich the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with policy learning, rewarding the Smatch score of sampled graphs. In addition, we combine several AMR-to-text alignments with an attention mechanism, and we supplement the parser with pre-processed concept identification, named entities, and contextualized embeddings. We achieve highly competitive performance, comparable to the best published results, and present an in-depth study ablating each of the new components of the parser.
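The policy-learning idea described above can be sketched as a REINFORCE-style update: sample a transition sequence from the parser's stochastic policy, score the resulting graph against the gold graph with Smatch, and use that score as the reward weighting the sequence's log-probability. The sketch below is a minimal toy illustration, not the authors' implementation; the per-step action distributions and the reward value stand in for a real parser policy and a real Smatch computation.

```python
import math
import random

def sample_actions(policy_probs, rng):
    """Sample one transition action per step from categorical distributions.

    policy_probs: list of per-step probability lists (a stand-in for the
    parser policy's softmax outputs). Returns the sampled action indices
    and the total log-probability of the sampled sequence.
    """
    actions, log_prob = [], 0.0
    for probs in policy_probs:
        a = rng.choices(range(len(probs)), weights=probs)[0]
        actions.append(a)
        log_prob += math.log(probs[a])
    return actions, log_prob

def reinforce_loss(log_prob, reward, baseline=0.0):
    """REINFORCE objective: -(reward - baseline) * log p(actions).

    Minimizing this increases the probability of action sequences whose
    reward (here, the Smatch score of the sampled graph) exceeds the
    baseline, and decreases it otherwise.
    """
    return -(reward - baseline) * log_prob

# Toy usage: two decoding steps, each with two candidate transitions.
rng = random.Random(0)
step_probs = [[0.7, 0.3], [0.5, 0.5]]
actions, lp = sample_actions(step_probs, rng)
# In the real setting, the sampled actions would be executed to build an
# AMR graph, and `reward` would be its Smatch score against the reference.
loss = reinforce_loss(lp, reward=0.8, baseline=0.5)
```

Because Smatch is computed on the completed graph, the reward is a single sequence-level scalar; a baseline (e.g. the score of the greedy decode) reduces the variance of the gradient estimate.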


Related research

07/24/2017 - AMR Parsing using Stack-LSTMs
We present a transition-based AMR parser that directly generates AMR par...

10/20/2020 - Transition-based Parsing with Stack-Transformers
Modeling the parser state is key to good performance in transition-based...

12/15/2016 - Transition-based Parsing with Context Enhancement and Future Reward Reranking
This paper presents a novel reranking model, future reward reranking, to...

10/08/2018 - An AMR Aligner Tuned by Transition-based Parser
In this paper, we propose a new rich resource enhanced AMR aligner which...

12/15/2021 - Learning to Transpile AMR into SPARQL
We propose a transition-based system to transpile Abstract Meaning Repre...

03/20/2019 - Left-to-Right Dependency Parsing with Pointer Networks
We propose a novel transition-based algorithm that straightforwardly par...

03/11/2016 - Training with Exploration Improves a Greedy Stack-LSTM Parser
We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) t...
