AdapterEM: Pre-trained Language Model Adaptation for Generalized Entity Matching using Adapter-tuning

05/30/2023
by John Bosco Mugeni, et al.

Entity Matching (EM) involves identifying different data representations that refer to the same entity across multiple data sources and is typically formulated as a binary classification problem. It is a challenging problem in data integration due to the heterogeneity of data representations. State-of-the-art solutions have adopted NLP techniques based on pre-trained language models (PrLMs) via the fine-tuning paradigm; however, sequential fine-tuning of overparameterized PrLMs can lead to catastrophic forgetting, especially in low-resource scenarios. In this study, we propose a parameter-efficient paradigm for fine-tuning PrLMs based on adapters, small neural networks encapsulated between layers of a PrLM, by optimizing only the adapter and classifier weights while the PrLM's parameters are frozen. Adapter-based methods have been successfully applied to multilingual speech problems with promising results; however, their effectiveness when applied to EM is not yet well understood, particularly for generalized EM with heterogeneous data. Furthermore, we explore using (i) pre-trained adapters and (ii) invertible adapters to capture token-level language representations, and demonstrate their benefits for transfer learning on the generalized EM benchmark. Our results show that our solution achieves comparable or superior performance to full-scale PrLM fine-tuning and prompt-tuning baselines while utilizing a significantly smaller computational footprint (≈13% of the PrLM parameters).
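The core idea in the abstract, inserting small trainable adapter modules while freezing the pre-trained backbone, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's exact architecture: the `Adapter` class, the bottleneck size of 64, and the use of a single generic `TransformerEncoderLayer` in place of a real PrLM layer are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen layer's output intact at init.
        return x + self.up(self.act(self.down(x)))

hidden_dim = 768  # e.g. BERT-base hidden size
# Stand-in for one frozen PrLM layer (a real PrLM would have many such layers).
encoder_layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=12, batch_first=True)
adapter = Adapter(hidden_dim)
classifier = nn.Linear(hidden_dim, 2)  # binary match / non-match head

# Freeze the pre-trained layer; only adapter + classifier weights are optimized.
for p in encoder_layer.parameters():
    p.requires_grad = False

# Forward pass: frozen layer, then adapter, then pooled classification.
tokens = torch.randn(1, 16, hidden_dim)          # (batch, seq_len, hidden)
logits = classifier(adapter(encoder_layer(tokens)).mean(dim=1))

trainable = sum(p.numel() for p in list(adapter.parameters()) + list(classifier.parameters()))
total = trainable + sum(p.numel() for p in encoder_layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")
```

With the default feed-forward width of this generic layer, the trainable fraction comes out to a few percent, which is the kind of footprint reduction (relative to tuning all PrLM parameters) that the abstract refers to.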


