CoreLM: Coreference-aware Language Model Fine-Tuning

11/04/2021
by   Nikolaos Stylianou, et al.

Language Models underpin all modern Natural Language Processing (NLP) tasks. The introduction of the Transformer architecture has contributed significantly to making Language Modeling very effective across many NLP tasks, leading to notable advancements in the field. However, Transformers come with a large computational cost that grows quadratically with the input length. This poses a challenge, since understanding long texts requires a lot of context. In this paper, we propose a fine-tuning framework, named CoreLM, that extends the architecture of current Pretrained Language Models so that they incorporate explicit entity information. By introducing entity representations, we make information outside the contextual space of the model available, which results in a better Language Model for a fraction of the computational cost. We implement our approach using GPT2 and compare the fine-tuned model to the original. Our proposed model achieves a lower perplexity on the GUMBY and LAMBADA datasets than GPT2 and a fine-tuned version of GPT2 without any architectural changes. We also compare the models' performance in terms of accuracy on LAMBADA and the Children's Book Test, with and without the use of model-created coreference annotations.
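The abstract does not describe the implementation, so the following is only a minimal, hypothetical sketch of the core idea: injecting learned entity (coreference-cluster) representations into a pretrained GPT-2's token states before the language-modeling head. The CorefAugmentedGPT2 class, the gating layer, and the per-token entity_ids input are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch (not the authors' code): augment GPT-2 token states with
# entity representations derived from coreference clusters before the LM head.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class CorefAugmentedGPT2(nn.Module):
    def __init__(self, model_name="gpt2", max_entities=128):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained(model_name)
        hidden = self.gpt2.config.n_embd
        # One learned vector per coreference cluster; index 0 = "no entity".
        self.entity_emb = nn.Embedding(max_entities, hidden, padding_idx=0)
        # Gate deciding how much entity information enters each position.
        self.gate = nn.Linear(2 * hidden, hidden)

    def forward(self, input_ids, entity_ids):
        # Contextual token representations from the GPT-2 backbone.
        h = self.gpt2.transformer(input_ids).last_hidden_state
        e = self.entity_emb(entity_ids)                  # (batch, seq, hidden)
        g = torch.sigmoid(self.gate(torch.cat([h, e], dim=-1)))
        h = h + g * e                                    # inject entity signal
        return self.gpt2.lm_head(h)                      # next-token logits

tok = GPT2Tokenizer.from_pretrained("gpt2")
ids = tok("Alice met Bob. She greeted him.", return_tensors="pt").input_ids
# In practice a coreference resolver would assign a cluster id to each token
# (e.g. Alice/She -> 1, Bob/him -> 2); here we use the "no entity" placeholder.
entity_ids = torch.zeros_like(ids)
model = CorefAugmentedGPT2()
logits = model(ids, entity_ids)
print(logits.shape)  # (1, sequence_length, vocab_size)
```

Because the entity table is looked up by cluster id rather than recomputed from the full document, entity information from far outside the current context window can be made available without increasing the quadratic attention cost, which is the trade-off the abstract highlights.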

