CABACE: Injecting Character Sequence Information and Domain Knowledge for Enhanced Acronym and Long-Form Extraction

12/25/2021
by Nithish Kannen, et al.

Acronyms and long forms are commonly found in research documents, especially in the scientific and legal domains. Many acronyms used in such documents are domain-specific and rarely appear in general text corpora. As a result, transformer-based NLP models often treat acronym tokens as OOV (Out of Vocabulary), particularly for non-English languages, and their performance suffers when linking acronyms to their long forms during extraction. Moreover, pretrained transformer models like BERT are not specialized to handle scientific and legal documents. With these points as the overarching motivation behind this work, we propose a novel framework, CABACE: Character-Aware BERT for ACronym Extraction, which takes character sequences in text into account and is adapted to the scientific and legal domains through masked language modelling. We train CABACE with an augmented objective that adds max-loss and mask-loss terms to the standard cross-entropy loss. We further leverage pseudo-labelling and adversarial data generation to improve the generalizability of the framework. Experimental results demonstrate the superiority of the proposed framework over various baselines. Additionally, we show that the proposed framework is better suited than baseline models for zero-shot generalization to non-English languages, reinforcing the effectiveness of our approach. Our team BacKGProp secured the highest scores on the French dataset, second-highest on Danish and Vietnamese, and third-highest on the English-Legal dataset on the global leaderboard for the acronym extraction (AE) shared task at SDU@AAAI-22.
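The abstract does not spell out the exact form of the augmented objective. The sketch below, in PyTorch, shows one plausible reading in which a max-loss term (the worst-classified token in the batch) and a mask-loss term (cross-entropy restricted to masked positions) are added to the standard token-level cross-entropy; the weights alpha and beta, the function name, and the argument layout are assumptions for illustration, not the authors' released code.

    # Minimal sketch (assumed formulation, not the paper's implementation).
    import torch
    import torch.nn.functional as F

    def augmented_loss(logits, labels, mask_positions, alpha=0.5, beta=0.5):
        # logits: (batch, seq_len, num_tags); labels: (batch, seq_len) tag ids;
        # mask_positions: bool tensor (batch, seq_len), True where a token was masked.
        num_tags = logits.size(-1)
        per_token = F.cross_entropy(
            logits.reshape(-1, num_tags), labels.reshape(-1), reduction="none"
        ).reshape(labels.shape)

        ce_loss = per_token.mean()    # standard token-level cross-entropy
        max_loss = per_token.max()    # hardest-to-classify token in the batch
        if mask_positions.any():      # cross-entropy over masked positions only
            mask_loss = per_token[mask_positions].mean()
        else:
            mask_loss = per_token.new_zeros(())

        return ce_loss + alpha * max_loss + beta * mask_loss

    # Usage with dummy tensors (5 BIO-style tags, batch of 2, sequence length 8):
    logits = torch.randn(2, 8, 5, requires_grad=True)
    labels = torch.randint(0, 5, (2, 8))
    mask_positions = torch.rand(2, 8) < 0.15
    loss = augmented_loss(logits, labels, mask_positions)
    loss.backward()

Under this reading, the max-loss term pushes the model to improve on its single worst token prediction, while the mask-loss term keeps the masked (OOV-prone) positions weighted explicitly rather than diluted in the batch average.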

