
Higher-order Coreference Resolution with Coarse-to-fine Inference

04/15/2018
by Kenton Lee et al.

We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.
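The abstract's refinement step can be illustrated with a small sketch: at each iteration, pairwise antecedent scores are turned into an attention distribution, each span attends over its possible antecedents, and the attended representation is mixed back into the span representation. The code below is a hypothetical, simplified illustration, not the paper's implementation: the bilinear scoring function, the fixed 0.5 interpolation gate (standing in for a learned sigmoid gate), and the function names are assumptions for the sketch, and the coarse-to-fine pruning stage is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def refine_spans(g, score_fn, n_iters=2):
    """Iteratively refine span representations g (num_spans x dim)
    by attending over antecedent distributions (hypothetical sketch).

    score_fn(g) returns a (num_spans, num_spans) matrix of pairwise
    antecedent scores, e.g. a coarse bilinear factor g @ W @ g.T.
    """
    n = g.shape[0]
    # Spans may only attend to themselves and preceding spans,
    # so mask out scores for spans that appear later in the document.
    future_mask = np.triu(np.ones((n, n)), k=1) * -1e9
    for _ in range(n_iters):
        scores = score_fn(g) + future_mask
        p = softmax(scores, axis=-1)       # antecedent attention distribution
        a = p @ g                          # expected antecedent representation
        f = 0.5                            # fixed gate; learned in practice
        g = f * g + (1 - f) * a            # interpolate old and attended reps
    return g

# Toy usage with a random coarse bilinear scoring factor.
rng = np.random.RandomState(0)
g0 = rng.randn(5, 8)
W = rng.randn(8, 8)
g_refined = refine_spans(g0, lambda g: g @ W @ g.T, n_iters=2)
```

Each iteration corresponds to one soft "hop" through the predicted clusters: after two iterations, a span's representation can incorporate information from the antecedent of its antecedent.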

