Getting More Out Of Syntax with PropS

03/04/2016
by Gabriel Stanovsky, et al.

Semantic NLP applications often rely on dependency trees to recognize major elements of the proposition structure of sentences. Yet, while much semantic structure is indeed expressed by syntax, many phenomena are not easily read out of dependency trees, often leading to further ad hoc heuristic post-processing or to information loss. To directly address the needs of semantic applications, we present PropS -- an output representation designed to explicitly and uniformly express much of the proposition structure implied by syntax, and an associated tool for extracting it from dependency trees.
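To make the gap concrete, below is a minimal, hypothetical sketch of the kind of ad hoc post-processing over dependency trees that the abstract says applications resort to. It is not the PropS tool or its representation; it assumes spaCy with the en_core_web_sm model, and the naive_propositions helper and example sentence are illustrative only.

```python
# Illustrative sketch only: a naive predicate-argument reader over spaCy
# dependency parses. This is NOT PropS; it shows the heuristic post-processing
# that the abstract argues semantic applications currently fall back on.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

def naive_propositions(sentence):
    """Extract rough (subject, predicate, object) tuples from a dependency tree."""
    doc = nlp(sentence)
    props = []
    for token in doc:
        if token.pos_ != "VERB":
            continue
        subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
        objects = [c for c in token.children if c.dep_ in ("dobj", "attr", "dative")]
        for subj in subjects:
            for obj in objects or [None]:
                props.append((
                    " ".join(t.text for t in subj.subtree),
                    token.lemma_,
                    " ".join(t.text for t in obj.subtree) if obj is not None else None,
                ))
    return props

print(naive_propositions("The committee decided to postpone the vote."))
# Prints [('The committee', 'decide', None)]: the reader misses that the
# committee is also the (implicit) subject of "postpone" -- exactly the kind
# of phenomenon a uniform proposition-structure representation aims to expose.
```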


