Broad-Coverage Semantic Parsing as Transduction

09/05/2019
by Sheng Zhang et al.

We unify different broad-coverage semantic parsing tasks under a transduction paradigm and propose an attention-based neural framework that incrementally builds a meaning representation via a sequence of semantic relations. By leveraging multiple attention mechanisms, the transducer can be effectively trained without relying on a pre-trained aligner. Experiments conducted on three separate broad-coverage semantic parsing tasks -- Abstract Meaning Representation (AMR), Semantic Dependency Parsing (SDP), and Universal Conceptual Cognitive Annotation (UCCA) -- demonstrate that our attention-based neural transducer improves the state of the art on both AMR and UCCA, and is competitive with the state of the art on SDP.
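To make the transduction view concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code): a decoder that, at each step, attends over the source sentence to predict a node label and a relation, and attends over previously generated nodes to choose a head, so the meaning representation grows incrementally as a sequence of (head, relation, node) edges. All class names, dimensions, and the greedy decoding loop are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch of semantic parsing as transduction: the output graph
# is built one node at a time, each node attached to an earlier node by a
# labeled semantic relation. Names and sizes here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransducerSketch(nn.Module):
    def __init__(self, vocab_size, num_relations, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder_cell = nn.LSTMCell(hidden, hidden)
        self.src_attn = nn.Linear(hidden, hidden)       # source-side attention
        self.node_out = nn.Linear(2 * hidden, vocab_size)
        self.rel_out = nn.Linear(2 * hidden, num_relations)

    def forward(self, src_ids, max_steps=10):
        src = self.embed(src_ids)                       # (B, T, H)
        enc, (h, c) = self.encoder(src)
        h, c = h.squeeze(0), c.squeeze(0)               # (B, H) each
        inp = torch.zeros_like(h)                       # start symbol
        node_states, edges = [], []
        for _ in range(max_steps):
            h, c = self.decoder_cell(inp, (h, c))
            # Source-side attention: weight encoder states by similarity to h.
            scores = torch.einsum('bth,bh->bt', self.src_attn(enc), h)
            ctx = torch.einsum('bt,bth->bh', F.softmax(scores, dim=-1), enc)
            state = torch.cat([h, ctx], dim=-1)
            node_logits = self.node_out(state)          # predict new node label
            rel_logits = self.rel_out(state)            # predict relation label
            # Target-side attention: point to a previously built node as head.
            if node_states:
                prev = torch.stack(node_states, dim=1)  # (B, n, H)
                head = torch.einsum('bnh,bh->bn', prev, h).argmax(dim=-1)
            else:
                head = torch.zeros(h.size(0), dtype=torch.long)  # root
            edges.append((head, rel_logits.argmax(-1), node_logits.argmax(-1)))
            node_states.append(h)
            inp = h                                     # greedy state feeding
        return edges

# Toy usage: "parse" a batch of two 7-token sentences of random ids.
parser = TransducerSketch(vocab_size=1000, num_relations=64)
edges = parser(torch.randint(0, 1000, (2, 7)), max_steps=5)
```

Because every step emits a (head, relation, node) edge rather than a token of a linearized string, the same decoder can in principle target AMR graphs, SDP dependency graphs, or UCCA structures, which is the sense in which the tasks are unified.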


