What's in a Name? Are BERT Named Entity Representations just as Good for any other Name?

07/14/2020
by Sriram Balasubramanian, et al.

We evaluate the named entity representations of BERT-based NLP models by investigating their robustness to replacements from the same typed class in the input. We highlight that, although such perturbations are natural on several tasks, state-of-the-art trained models are surprisingly brittle. The brittleness persists even in recent entity-aware BERT models. We also attempt to discern the cause of this non-robustness, considering factors such as tokenization and frequency of occurrence. We then propose a simple method that ensembles predictions from multiple replacements while jointly modeling the uncertainty of type annotations and label predictions. Experiments on three NLP tasks show that our method enhances robustness and increases accuracy on both natural and adversarial datasets.
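
As a rough illustration of the approach described above, the sketch below perturbs a single entity mention with same-type substitutes drawn from a small gazetteer, averages a classifier's label distributions across the replacements, and interpolates with the original prediction according to the confidence of the type annotation. The gazetteer, the `predict_probs` interface, and the linear weighting scheme are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (not the paper's exact method): swap a named entity for other
# names of the same type, ensemble the model's predictions across the
# replacements, and back off to the original prediction when the type
# annotation itself is uncertain. The gazetteer and weighting are assumptions.

from collections import defaultdict

# Hypothetical typed-entity gazetteer: entity type -> candidate replacements.
GAZETTEER = {
    "PERSON": ["Alice Johnson", "Rahul Mehta", "Wei Chen"],
    "LOCATION": ["Oslo", "Nairobi", "Santiago"],
}

def replace_entity(text, span, replacement):
    """Swap the entity mention at character span (start, end) with a same-type name."""
    start, end = span
    return text[:start] + replacement + text[end:]

def ensemble_predictions(text, entity_span, entity_type, type_confidence,
                         predict_probs, gazetteer=GAZETTEER):
    """Average label distributions over same-type replacements.

    `predict_probs(text) -> {label: prob}` is any trained classifier.
    `type_confidence` is the tagger's confidence in `entity_type` for the span;
    it interpolates between the original input and the replacement ensemble.
    """
    original = predict_probs(text)

    # Average predictions across replacements drawn from the same typed class.
    averaged = defaultdict(float)
    candidates = gazetteer.get(entity_type, [])
    for name in candidates:
        probs = predict_probs(replace_entity(text, entity_span, name))
        for label, p in probs.items():
            averaged[label] += p / len(candidates)

    # Weight the replacement ensemble by the type-annotation confidence and
    # fall back to the original prediction otherwise.
    combined = {
        label: type_confidence * averaged.get(label, 0.0)
               + (1.0 - type_confidence) * original.get(label, 0.0)
        for label in set(original) | set(averaged)
    }
    return max(combined, key=combined.get), combined
```

Interpolating by type confidence is one simple way to avoid trusting replacements when the entity tagger is itself unsure; the paper's joint modeling of type and label uncertainty may differ in its details.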

Related research:

08/27/2020 · GREEK-BERT: The Greeks visiting Sesame Street
Transformer-based language models, such as BERT and its variants, have a...

08/09/2022 · Effects of Annotations' Density on Named Entity Recognition Models' Performance in the Context of African Languages
African languages have recently been the subject of several studies in N...

05/23/2023 · On Robustness of Finetuned Transformer-based NLP Models
Transformer-based pretrained models like BERT, GPT-2 and T5 have been fi...

09/01/2019 · Pre-training of Deep Contextualized Embeddings of Words and Entities for Named Entity Disambiguation
Deep contextualized embeddings trained using unsupervised language model...

04/09/2020 · Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve
Named Entity Recognition systems achieve remarkable performance on domai...

05/02/2022 · BERTops: Studying BERT Representations under a Topological Lens
Proposing scoring functions to effectively understand, analyze and learn...

02/06/2023 · Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks
In tasks like node classification, image segmentation, and named-entity ...
