Linguistically Informed Masking for Representation Learning in the Patent Domain

06/10/2021
by Sophia Althammer, et al.

Domain-specific contextualized language models have demonstrated substantial effectiveness gains on domain-specific downstream tasks such as similarity matching, entity recognition, and information retrieval. However, successfully applying such models in highly specific language domains requires domain adaptation of the pre-trained models. In this paper we propose the empirically motivated Linguistically Informed Masking (LIM) method, which focuses domain-adaptive pre-training on the linguistic patterns of patents, a highly technical sublanguage. We quantify the relevant differences between patent, scientific, and general-purpose language, and we demonstrate for two language models (BERT and SciBERT) that domain adaptation with LIM leads to systematically improved representations, evaluated on two independent downstream tasks for patent language: IPC classification and similarity matching. We further demonstrate the impact of balancing the learning from different information sources during domain adaptation. We make the source code as well as the domain-adaptive pre-trained patent language models publicly available at https://github.com/sophiaalthammer/patent-lim.
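The abstract does not spell out the masking procedure, but a minimal sketch can illustrate the idea: masked-language-model pre-training in which part of the masking budget is steered toward linguistically meaningful units instead of being spread uniformly over tokens. The sketch below is an assumption-laden illustration, not the authors' implementation: it assumes noun chunks (as produced by spaCy) stand in for the patent-specific linguistic patterns, and it introduces a hypothetical balancing parameter p_lim that splits the masking budget between noun-chunk tokens and the remaining tokens. The actual procedure and hyperparameters are in the linked repository.

```python
# Hypothetical sketch of Linguistically Informed Masking (LIM).
# Assumptions (not taken from the abstract): noun chunks are the
# linguistic units of interest, spaCy supplies them, and p_lim
# controls the share of the masking budget spent on them.
# Requires: pip install spacy transformers
#           python -m spacy download en_core_web_sm
import random

import spacy
from transformers import BertTokenizerFast

nlp = spacy.load("en_core_web_sm")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")


def lim_mask(text: str, mask_prob: float = 0.15, p_lim: float = 0.5):
    """Return (input_ids, labels) with noun-chunk-biased MLM masking."""
    doc = nlp(text)
    enc = tokenizer(text, return_offsets_mapping=True, truncation=True)
    input_ids = enc["input_ids"]
    offsets = enc["offset_mapping"]

    # Mark which word-piece positions fall inside a noun chunk;
    # special tokens have (0, 0) offsets and are skipped.
    chunk_spans = [(c.start_char, c.end_char) for c in doc.noun_chunks]
    in_chunk = [
        start < end and any(s <= start and end <= e for s, e in chunk_spans)
        for start, end in offsets
    ]
    chunk_pos = [i for i, flag in enumerate(in_chunk) if flag]
    other_pos = [i for i, flag in enumerate(in_chunk)
                 if not flag and offsets[i][0] < offsets[i][1]]

    # Split the masking budget: a p_lim share on noun-chunk tokens,
    # the rest uniformly over the remaining tokens.
    budget = max(1, int(mask_prob * (len(chunk_pos) + len(other_pos))))
    n_chunk = min(len(chunk_pos), int(round(p_lim * budget)))
    n_other = min(len(other_pos), budget - n_chunk)
    to_mask = set(random.sample(chunk_pos, n_chunk) +
                  random.sample(other_pos, n_other))

    # Build MLM labels; -100 positions are ignored by the loss.
    # (BERT's 80/10/10 replacement rule is omitted for brevity.)
    labels = [-100] * len(input_ids)
    for i in to_mask:
        labels[i] = input_ids[i]
        input_ids[i] = tokenizer.mask_token_id
    return input_ids, labels
```

Under these assumptions, setting p_lim = 0 recovers ordinary uniform token masking, which makes the abstract's point about balancing concrete: the parameter trades off how much the model learns from domain-typical noun phrases versus the rest of the text.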

Related research

04/22/2022 · KALA: Knowledge-Augmented Language Model Adaptation
Pre-trained language models (PLMs) have achieved remarkable success on v...

05/22/2023 · TADA: Efficient Task-Agnostic Domain Adaptation for Transformers
Intermediate training of pre-trained transformer-based language models o...

11/06/2022 · On the Domain Adaptation and Generalization of Pretrained Language Models: A Survey
Recent advances in NLP are brought by a range of large-scale pretrained ...

06/05/2023 · Stack Over-Flowing with Results: The Case for Domain-Specific Pre-Training Over One-Size-Fits-All Models
Large pre-trained neural language models have brought immense progress t...

09/13/2023 · TAP: Targeted Prompting for Task Adaptive Generation of Textual Training Instances for Visual Classification
Vision and Language Models (VLMs), such as CLIP, have enabled visual rec...

01/21/2023 · Adapting a Language Model While Preserving its General Knowledge
Domain-adaptive pre-training (or DA-training for short), also known as p...

03/01/2023 · Domain-adapted large language models for classifying nuclear medicine reports
With the growing use of transformer-based language models in medicine, i...
