
Deep Biaffine Attention for Neural Dependency Parsing
This paper builds off recent work from Kiperwasser & Goldberg (2016) usi...

Sequential Graph Dependency Parser
We propose a method for non-projective dependency parsing by incremental...

Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement
We propose the Recursive Non-Autoregressive Graph-to-Graph Transformer a...

Splitting EUD graphs into trees: A quick and clatty approach
We present the system submission from the FASTPARSE team for the EUD Sha...

Fill it up: Exploiting partial dependency annotations in a minimum spanning tree parser
Unsupervised models of dependency parsing typically require large amount...

Understanding Unnatural Questions Improves Reasoning over Text
Complex question answering (CQA) over raw text is a challenging task. A ...

Reduction of Parameter Redundancy in Biaffine Classifiers with Symmetric and Circulant Weight Matrices
Currently, the biaffine classifier has been attracting attention as a me...
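The abstract above is truncated, but its title names the core idea: replacing the dense weight matrix of a biaffine classifier with a structured (e.g. circulant) matrix to cut parameters. A minimal numpy sketch of that idea, with illustrative dimensions chosen here (not taken from the paper):

```python
import numpy as np

d = 6
rng = np.random.default_rng(1)

# A dense biaffine weight matrix costs d*d parameters.
dense = rng.normal(size=(d, d))

# A circulant matrix is fully determined by its first row (d parameters):
# each subsequent row is the previous one rotated right by one position.
first_row = rng.normal(size=d)
circ = np.stack([np.roll(first_row, k) for k in range(d)])

# Illustrative head/dependent vectors; the biaffine score h^T W m now
# uses d parameters in W instead of d*d.
h = rng.normal(size=d)
m = rng.normal(size=d)
score = h @ circ @ m
print(circ.shape)  # (6, 6)
```

The d-fold parameter reduction is the point; in practice the multiplication by a circulant matrix can also be accelerated with an FFT, though the sketch above uses a plain matrix product.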

Question Decomposition with Dependency Graphs
QDMR is a meaning representation for complex questions, which decomposes questions into a sequence of atomic steps. While state-of-the-art QDMR parsers use the common sequence-to-sequence (seq2seq) approach, a QDMR structure fundamentally describes labeled relations between spans in the input question, and thus dependency-based approaches seem appropriate for this task. In this work, we present a QDMR parser based on dependency graphs (DGs), where nodes in the graph are words and edges describe logical relations that correspond to the different computation steps. We propose (a) a non-autoregressive graph parser, where all graph edges are computed simultaneously, and (b) a seq2seq parser that uses gold graphs as auxiliary supervision. We find that the graph parser leads to a moderate reduction in performance (0.47 to 0.44) but, thanks to its non-autoregressive nature, to a 16x speed-up in inference time, and to improved sample complexity compared to a seq2seq model. In addition, a seq2seq model trained with auxiliary graph supervision generalizes better to new domains than a plain seq2seq model, and also performs better on questions with long sequences of computation steps.
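The non-autoregressive parser described in (a) scores every head-dependent pair in parallel rather than emitting edges one at a time. A minimal sketch of such all-at-once biaffine edge scoring, with made-up dimensions and random vectors standing in for learned token representations (none of the names below come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 5, 8                        # 5 tokens, hidden size 8 (illustrative)
heads = rng.normal(size=(n, d))    # head-role representations per token
deps = rng.normal(size=(n, d))     # dependent-role representations per token

# Biaffine weight: U is (d+1, d) so the head side carries a bias term.
U = rng.normal(size=(d + 1, d))

# Append a constant 1 to each head vector for the bias.
heads_1 = np.concatenate([heads, np.ones((n, 1))], axis=1)

# All n*n edge scores in a single matrix product: scores[i, j] is the
# score of token i being the head of token j.
scores = heads_1 @ U @ deps.T      # shape (5, 5)

# A non-autoregressive decoder reads off edges independently: e.g. an
# argmax per column predicts one head per token in one shot.
pred_heads = scores.argmax(axis=0)
print(scores.shape)                # (5, 5)
print(pred_heads.shape)            # (5,)
```

Because no edge decision conditions on another, inference cost is one matrix product rather than a sequential decoding loop, which is the source of the speed-up the abstract reports.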