Linguistically Informed Masking for Representation Learning in the Patent Domain

by Sophia Althammer et al.

Domain-specific contextualized language models have demonstrated substantial effectiveness gains on domain-specific downstream tasks such as similarity matching, entity recognition, or information retrieval. However, successfully applying such models in highly specific language domains requires domain adaptation of the pre-trained models. In this paper we propose the empirically motivated Linguistically Informed Masking (LIM) method to focus domain-adaptive pre-training on the linguistic patterns of patents, which use a highly technical sublanguage. We quantify the relevant differences between patent, scientific, and general-purpose language, and we demonstrate for two different language models (BERT and SciBERT) that domain adaptation with LIM leads to systematically improved representations, evaluating the domain-adapted representations of patent language on two independent downstream tasks: IPC classification and similarity matching. We demonstrate the impact of balancing the learning from different information sources during domain adaptation for the patent domain. We make the source code as well as the domain-adaptive pre-trained patent language models publicly available at
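The core idea of linguistically informed masking, as described above, is to bias which tokens get masked during masked-language-model pre-training toward linguistically salient (here, domain-specific) tokens rather than sampling uniformly, with a balance parameter controlling how much mass goes to each source. The sketch below is a hypothetical illustration, not the authors' exact implementation: `lam`, `lim_mask`, and the boolean `is_domain` flags are assumptions introduced for the example, and the domain-term tagging that would produce those flags is left out.

```python
import random

MASK = "[MASK]"

def lim_mask(tokens, is_domain, lam=0.7, mask_frac=0.15, rng=None):
    """Pick tokens to mask for MLM pre-training, interpolating between
    a uniform distribution and one concentrated on domain-specific
    tokens. lam is the (hypothetical) balance parameter: lam=0 recovers
    standard uniform masking, lam=1 masks only domain tokens."""
    rng = rng or random.Random(0)
    n = len(tokens)
    n_domain = sum(is_domain) or 1  # avoid division by zero
    # Per-token sampling weight: mix of uniform mass and domain-only mass.
    weights = [
        (1 - lam) / n + (lam / n_domain if d else 0.0)
        for d in is_domain
    ]
    k = max(1, round(mask_frac * n))
    # Sample k distinct positions without replacement, proportional to weight.
    positions, pool, w = [], list(range(n)), weights[:]
    for _ in range(k):
        total = sum(w[i] for i in pool)
        r = rng.random() * total
        acc = 0.0
        for i in pool:
            acc += w[i]
            if acc >= r:
                positions.append(i)
                pool.remove(i)
                break
    masked = [MASK if i in positions else t for i, t in enumerate(tokens)]
    return masked, sorted(positions)

# Toy patent-style sentence: technical terms flagged as domain-specific.
tokens = "the rotor blade comprises a composite laminate structure".split()
is_domain = [False, True, True, False, False, True, True, False]
masked, positions = lim_mask(tokens, is_domain, lam=0.7)
```

With `lam` high, masked positions fall mostly on the flagged technical terms, so the model spends more of its MLM loss predicting exactly the vocabulary that distinguishes patent language from general-purpose text.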






