Number Entity Recognition

05/07/2022
by Dhanasekar Sundararaman, et al.

Numbers are essential components of text, just like any other word tokens from which natural language processing (NLP) models are built and deployed. Although most NLP tasks do not treat numbers distinctly, NLP models already exhibit an underlying degree of numeracy. In this work, we tap this potential of state-of-the-art NLP models and transfer their numerical ability to boost performance on related tasks. Our proposed classification of numbers into entities helps NLP models perform well on several tasks, including a handcrafted Fill-In-The-Blank (FITB) task and question answering using joint embeddings, outperforming BERT and RoBERTa classification baselines.
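The abstract does not spell out the label set or the classifier, so the Python sketch below is only an illustration of the general idea, not the authors' method: numeric tokens are mapped to coarse entity labels (the labels, context rules, and function names here are hypothetical stand-ins for a learned number-entity classifier), and those labels replace the raw numbers before the text reaches a model such as BERT.

```python
import re

def classify_number(token: str, context: str) -> str:
    """Assign a hypothetical coarse entity label to a numeric token.
    These hand-written rules stand in for the learned classifier
    described in the paper, whose actual label set is not given here."""
    value = float(token)
    if "%" in context or "percent" in context:
        return "PERCENT"
    if token.isdigit() and 1000 <= value <= 2100:
        return "YEAR"
    if "$" in context or "dollar" in context:
        return "MONEY"
    return "QUANTITY"

def tag_numbers(sentence: str) -> str:
    """Replace each numeric token with a bracketed entity label so a
    downstream model sees the number's category rather than its
    surface form."""
    tokens = sentence.split()
    out = []
    for i, tok in enumerate(tokens):
        bare = tok.strip("$%,.")
        if re.fullmatch(r"\d+(\.\d+)?", bare):
            # Use a small context window around the number for the rules.
            window = " ".join(tokens[max(0, i - 2): i + 3])
            out.append(f"[{classify_number(bare, window)}]")
        else:
            out.append(tok)
    return " ".join(out)

print(tag_numbers("The company was founded in 1998 and grew 40 percent."))
# -> The company was founded in [YEAR] and grew [PERCENT] percent.
```

The tagged sentence could then be fed to a masked language model for a task like FITB; whether the paper substitutes labels, appends them, or embeds them jointly is not stated in this abstract.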


