BERT for Coreference Resolution: Baselines and Analysis

08/24/2019
by Mandar Joshi, et al.

We apply BERT to coreference resolution, achieving strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. A qualitative analysis of model predictions indicates that, compared to ELMo and BERT-base, BERT-large is particularly better at distinguishing between related but distinct entities (e.g., President and CEO). However, there is still room for improvement in modeling document-level context, conversations, and mention paraphrasing. Our code and models are publicly available.
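To make the approach concrete, the sketch below shows the core idea of swapping BERT in as the encoder of a span-ranking coreference model: encode the document with BERT, build span representations for candidate mentions, and score mention pairs with a small feedforward network. This is a minimal illustration, not the authors' released implementation; the model name, mention spans, token indices, and the untrained scorer are all assumptions for demonstration.

```python
# Minimal sketch of BERT-based pairwise mention scoring for coreference.
# NOT the paper's implementation; it only illustrates using BERT token
# representations inside a span-ranking coreference scorer.
import torch
import torch.nn as nn
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
encoder = BertModel.from_pretrained("bert-base-cased")

text = "The CEO met the President. She greeted him warmly."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # (seq_len, 768) contextual token representations
    token_reps = encoder(**inputs).last_hidden_state.squeeze(0)

def span_embedding(start: int, end: int) -> torch.Tensor:
    # Represent a span by concatenating its first and last token vectors,
    # a simplified version of the span representations used in
    # span-ranking coreference models.
    return torch.cat([token_reps[start], token_reps[end]])

# Untrained pairwise scorer: in a real system this MLP is trained
# end-to-end on coreference annotations (e.g., OntoNotes).
scorer = nn.Sequential(nn.Linear(4 * 768, 256), nn.ReLU(), nn.Linear(256, 1))

# Hypothetical mention spans given as (start_token, end_token) wordpiece
# indices; real spans come from a learned mention detector.
antecedent = span_embedding(2, 2)   # roughly "CEO"
anaphor = span_embedding(7, 7)      # roughly "She"
score = scorer(torch.cat([antecedent, anaphor]))
print(f"coreference score (untrained): {score.item():.3f}")
```

In the trained setting, each mention is compared against all preceding candidate antecedents and the pairwise scores are normalized into antecedent distributions; the paper's reported gains come from replacing the ELMo encoder with fine-tuned BERT in this kind of architecture.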


