Model Uncertainty-Aware Knowledge Amalgamation for Pre-Trained Language Models

12/14/2021
by Lei Li, et al.

As many fine-tuned pre-trained language models (PLMs) with promising performance are generously released, investigating better ways to reuse these models is vital as it can greatly reduce retraining computational costs and the potential environmental side effects. In this paper, we explore a novel model reuse paradigm, Knowledge Amalgamation (KA), for PLMs. Without human annotations available, KA aims to merge the knowledge from different teacher PLMs, each of which specializes in a different classification problem, into a versatile student model. To achieve this, we design a Model Uncertainty-aware Knowledge Amalgamation (MUKA) framework, which identifies the potentially adequate teacher using Monte-Carlo Dropout to approximate the golden supervision that guides the student. Experimental results demonstrate that MUKA achieves substantial improvements over baselines on benchmark datasets. Further analysis shows that MUKA generalizes well under several complicated settings, including multiple teacher models, heterogeneous teachers, and even cross-dataset teachers.
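The core mechanism the abstract describes is scoring each teacher's confidence on an unlabeled input via Monte-Carlo Dropout and using the most confident teacher's prediction as pseudo supervision for the student. As a rough illustration only, here is a minimal PyTorch sketch of that idea: the function names, the entropy-based uncertainty score, and the HuggingFace-style classifier interface are our assumptions rather than details from the paper, and for simplicity the sketch assumes all teachers share one label space (in the actual setting teachers cover different classification problems and the student's output space unifies them).

```python
import torch

def mc_dropout_predict(model, inputs, n_samples=8):
    """Run several stochastic forward passes with dropout kept active
    (Monte-Carlo Dropout); return the mean class distribution and its
    predictive entropy as an uncertainty score."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(**inputs).logits, dim=-1)  # assumes a HF-style classifier
            for _ in range(n_samples)
        ])
    mean_probs = probs.mean(dim=0)  # [batch, num_classes]
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

def pseudo_supervision(teachers, inputs):
    """For each unlabeled example, take the prediction of the teacher with
    the lowest MC-Dropout entropy as the soft target for the student.
    Simplifying assumption: all teachers share one output label space."""
    all_probs, all_entropy = zip(*(mc_dropout_predict(t, inputs) for t in teachers))
    all_probs = torch.stack(all_probs)      # [num_teachers, batch, num_classes]
    all_entropy = torch.stack(all_entropy)  # [num_teachers, batch]
    best = all_entropy.argmin(dim=0)        # most confident teacher per example
    batch_idx = torch.arange(best.size(0))
    return all_probs[best, batch_idx]       # [batch, num_classes] soft targets
```

The returned soft targets could then be distilled into the student with a standard KL-divergence loss; the paper itself should be consulted for how the selected teacher's distribution is projected into the student's unified label space.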

Related research:

- From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models (10/11/2022): Investigating better ways to reuse the released pre-trained language mod...
- Improving Heterogeneous Model Reuse by Density Estimation (05/23/2023): This paper studies multiparty learning, aiming to learn a model using th...
- MergeDistill: Merging Pre-trained Language Models using Distillation (06/05/2021): Pre-trained multilingual language models (LMs) have achieved state-of-th...
- Customizing Student Networks From Heterogeneous Teachers via Adaptive Knowledge Amalgamation (08/20/2019): A massive number of well-trained deep networks have been released by dev...
- Knowledge Inheritance for Pre-trained Language Models (05/28/2021): Recent explorations of large-scale pre-trained language models (PLMs) su...
- Knowledge Amalgamation from Heterogeneous Networks by Common Feature Learning (06/24/2019): An increasing number of well-trained deep networks have been released on...
- Student Becoming the Master: Knowledge Amalgamation for Joint Scene Parsing, Depth Estimation, and More (04/23/2019): In this paper, we investigate a novel deep-model reusing task. Our goal ...
