Comparison by Conversion: Reverse-Engineering UCCA from Syntax and Lexical Semantics

11/02/2020
by Daniel Hershcovich, et al.

Building robust natural language understanding systems will require a clear characterization of whether and how various linguistic meaning representations complement each other. To perform a systematic comparative analysis, we evaluate the mapping between meaning representations from different frameworks using two complementary methods: (i) a rule-based converter, and (ii) a supervised delexicalized parser that parses to one framework using only information from the other as features. We apply these methods to convert the STREUSLE corpus (with syntactic and lexical semantic annotations) to UCCA (a graph-structured full-sentence meaning representation). Both methods yield surprisingly accurate target representations, close to fully supervised UCCA parser quality—indicating that UCCA annotations are partially redundant with STREUSLE annotations. Despite this substantial convergence between frameworks, we find several important areas of divergence.
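To make the first method concrete, a rule-based converter of this kind can be imagined as a lookup from syntactic relations to UCCA edge categories. The sketch below is purely illustrative and hypothetical — the relation-to-category table, class names, and fallback label are assumptions for exposition, not the paper's actual rule set:

```python
from dataclasses import dataclass

# A token with its (assumed) Universal Dependencies relation to its head.
@dataclass
class Token:
    form: str
    deprel: str  # UD relation label

# Illustrative mapping from UD relations to UCCA edge categories
# (P = Process, A = Participant, D = Adverbial, E = Elaborator).
# This table is a hypothetical simplification, not the converter's rules.
RULES = {
    "root": "P",
    "nsubj": "A",
    "obj": "A",
    "advmod": "D",
    "amod": "E",
    "det": "E",
}

def convert(tokens):
    """Assign a UCCA category to each token by table lookup,
    falling back to 'U' (unanalyzable) for unknown relations."""
    return [(t.form, RULES.get(t.deprel, "U")) for t in tokens]

sent = [Token("The", "det"), Token("dog", "nsubj"),
        Token("barked", "root"), Token("loudly", "advmod")]
print(convert(sent))
# → [('The', 'E'), ('dog', 'A'), ('barked', 'P'), ('loudly', 'D')]
```

A real converter must additionally build the UCCA graph structure (units and remote edges) rather than label tokens in isolation, which is where the divergences the abstract mentions arise.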


