Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks
Semantic role labeling (SRL) involves extracting propositions (i.e. predicates and their typed arguments) from natural language sentences. State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) but do not model non-local interaction between arguments. We propose a new approach to modeling these interactions while maintaining efficient inference. Specifically, we use Capsule Networks: each proposition is encoded as a tuple of capsules, one capsule per argument type (i.e. role). These tuples serve as embeddings of entire propositions. In every network layer, the capsules interact with each other and with representations of words in the sentence. Each iteration results in updated proposition embeddings and updated predictions about the SRL structure. Our model substantially outperforms the non-refinement baseline on all 7 CoNLL-2009 languages and achieves state-of-the-art results on 5 of them (including English) for dependency SRL. We analyze the types of mistakes corrected by the refinement procedure. For example, each role is typically (but not always) filled by at most one argument. While enforcing this approximate constraint as a hard rule does not help a modern SRL system, the iterative procedure corrects such mistakes by capturing the intuition in a flexible and context-sensitive way.
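To make the refinement loop concrete, below is a minimal PyTorch sketch of the idea: one capsule per role serves as part of the proposition embedding, capsules interact with word representations and with each other at every layer, and role scores are re-predicted after each update. All names (CapsuleSRLRefiner, word_proj, etc.), dimensions, the attention-style interaction, the GRU update, and the bilinear scorer are illustrative assumptions, not the authors' implementation or routing scheme.

# Illustrative sketch only; hypothetical module, not the paper's code.
import torch
import torch.nn as nn

class CapsuleSRLRefiner(nn.Module):
    """Iteratively refines SRL predictions for one predicate.

    A proposition is represented by a tuple of capsules, one per role.
    Each iteration: capsules read from word representations, interact
    with each other, are updated, and role scores are re-predicted.
    """

    def __init__(self, hidden_dim, num_roles, capsule_dim, num_iters=3):
        super().__init__()
        self.num_iters = num_iters
        # One learned initial capsule per role, shared across sentences.
        self.init_capsules = nn.Parameter(torch.randn(num_roles, capsule_dim))
        self.word_proj = nn.Linear(hidden_dim, capsule_dim)   # words -> capsule space
        self.update = nn.GRUCell(capsule_dim, capsule_dim)    # capsule update
        self.scorer = nn.Bilinear(hidden_dim, capsule_dim, 1) # (word, role) score

    def forward(self, word_reprs):
        """word_reprs: (batch, seq_len, hidden_dim), e.g. LSTM encoder output."""
        batch, seq_len, hidden_dim = word_reprs.shape
        num_roles, cap_dim = self.init_capsules.shape
        capsules = self.init_capsules.unsqueeze(0).expand(batch, -1, -1).contiguous()
        keys = self.word_proj(word_reprs)  # (batch, seq_len, cap_dim)
        all_scores = []
        for _ in range(self.num_iters):
            # Role capsules attend over word representations to gather evidence.
            attn = torch.softmax(torch.matmul(capsules, keys.transpose(1, 2)), dim=-1)
            read = torch.matmul(attn, keys)  # (batch, num_roles, cap_dim)
            # Capsules also interact with each other (role-role interaction).
            inter = torch.softmax(torch.matmul(capsules, capsules.transpose(1, 2)), dim=-1)
            read = read + torch.matmul(inter, capsules)
            # Update the proposition embedding (the tuple of role capsules).
            capsules = self.update(
                read.reshape(-1, cap_dim), capsules.reshape(-1, cap_dim)
            ).view(batch, num_roles, cap_dim)
            # Re-score every (word, role) pair given the refined capsules.
            w = word_reprs.unsqueeze(2).expand(-1, -1, num_roles, -1)
            c = capsules.unsqueeze(1).expand(-1, seq_len, -1, -1)
            scores = self.scorer(w.reshape(-1, hidden_dim), c.reshape(-1, cap_dim))
            all_scores.append(scores.view(batch, seq_len, num_roles))
        return all_scores  # role scores after each refinement iteration

In a typical setup this module would sit on top of a sentence encoder, e.g. scores = CapsuleSRLRefiner(hidden_dim=512, num_roles=20, capsule_dim=128)(lstm_output), with a loss applied to the scores from the final (or every) iteration; these hyperparameters are placeholders.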