MergeDistill: Merging Pre-trained Language Models using Distillation

06/05/2021
by Simran Khanuja, et al.

Pre-trained multilingual language models (LMs) have achieved state-of-the-art results in cross-lingual transfer, but they often lead to an inequitable representation of languages due to limited capacity, skewed pre-training data, and sub-optimal vocabularies. This has prompted the creation of an ever-growing pre-trained model universe, where each model is trained on large amounts of language or domain specific data with a carefully curated, linguistically informed vocabulary. However, doing so brings us back full circle and prevents one from leveraging the benefits of multilinguality. To address the gaps at both ends of the spectrum, we propose MergeDistill, a framework to merge pre-trained LMs in a way that can best leverage their assets with minimal dependencies, using task-agnostic knowledge distillation. We demonstrate the applicability of our framework in a practical setting by leveraging pre-existing teacher LMs and training student LMs that perform competitively with or even outperform teacher LMs trained on several orders of magnitude more data and with a fixed model capacity. We also highlight the importance of teacher selection and its impact on student model performance.
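
The paper itself specifies how teachers are selected, how teacher and student vocabularies are mapped, and how training data is mixed; none of those details are reproduced here. As a rough illustration of the core idea only, the sketch below shows task-agnostic (masked-LM style) distillation from several teacher LMs into a single student using plain PyTorch. The TinyMaskedLM class, the distillation_step function, and all hyperparameters are hypothetical placeholders, not the authors' implementation.

import torch
import torch.nn.functional as F

# Hypothetical toy LM standing in for both the pre-trained teachers and the student.
# In the paper, teachers are pre-existing language- or domain-specific LMs; vocabulary
# mapping between teacher and student tokenizers is elided in this sketch.
class TinyMaskedLM(torch.nn.Module):
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, hidden)
        self.encoder = torch.nn.TransformerEncoder(
            torch.nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.lm_head = torch.nn.Linear(hidden, vocab_size)

    def forward(self, token_ids):
        # (batch, seq) -> (batch, seq, vocab) logits over the shared student vocabulary
        return self.lm_head(self.encoder(self.embed(token_ids)))


def distillation_step(student, teachers, batches, optimizer, temperature=2.0):
    """One multi-teacher distillation step: each teacher produces soft masked-LM
    targets on a batch drawn from its own data, and the student minimizes the KL
    divergence to those targets. Task-agnostic: no downstream labels are used."""
    optimizer.zero_grad()
    total_loss = 0.0
    for teacher, token_ids in zip(teachers, batches):
        with torch.no_grad():
            teacher_logits = teacher(token_ids)
        student_logits = student(token_ids)
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        total_loss = total_loss + loss
    total_loss.backward()
    optimizer.step()
    return float(total_loss)


if __name__ == "__main__":
    teachers = [TinyMaskedLM().eval() for _ in range(2)]  # e.g. two monolingual teachers
    student = TinyMaskedLM()                              # one multilingual student
    opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
    batches = [torch.randint(0, 1000, (8, 16)) for _ in teachers]
    print(distillation_step(student, teachers, batches, opt))

In a setting closer to the paper, each teacher would be an existing pre-trained LM and each batch would come from that teacher's own pre-training corpus, so the student sees every teacher's languages through its soft predictions.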


Related research

02/19/2023  HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers
Knowledge distillation has been shown to be a powerful model compression...

05/26/2023  A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models
Distillation from Weak Teacher (DWT) is a method of transferring knowled...

10/11/2022  From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models
Investigating better ways to reuse the released pre-trained language mod...

12/02/2021  Tiny-NewsRec: Efficient and Effective PLM-based News Recommendation
Personalized news recommendation has been widely adopted to improve user...

01/21/2021  Distilling Large Language Models into Tiny and Effective Students using pQRNN
Large pre-trained multilingual models like mBERT, XLM-R achieve state of...

12/14/2021  Model Uncertainty-Aware Knowledge Amalgamation for Pre-Trained Language Models
As many fine-tuned pre-trained language models (PLMs) with promising per...

05/13/2023  AMTSS: An Adaptive Multi-Teacher Single-Student Knowledge Distillation Framework For Multilingual Language Inference
Knowledge distillation is of key importance to launching multilingual pr...
