Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization

08/07/2023
by   Xiachong Feng, et al.

Meeting summarization has emerged as a promising technique for providing users with condensed summaries. However, existing work has focused on training models with centralized data, neglecting real-world scenarios where meeting data are infeasible to collect centrally because of their sensitive nature. This gap motivates us to explore federated learning for meeting summarization. Two critical challenges impede progress. First, state-of-the-art summarizers are based on parameter-heavy pre-trained models, and exchanging such a model's parameters across clients imposes large bandwidth costs. Second, real-world meeting data belong to various domains and are distributed across clients, so they are non-identically and independently distributed (non-IID); standard IID assumptions do not hold, which affects which learning algorithms work well. To address these challenges, we propose Adapter-based Federated Selective Knowledge Distillation (AdaFedSelecKD) for training performant client models. Specifically, we develop an adapter-based summarization model in which two adapters cooperatively facilitate learning with far fewer trainable parameters, reducing communication costs. We then devise a selective knowledge distillation strategy that helps clients robustly perform domain-focused modelling on their own data while leveraging global parameters trained on non-IID data. Extensive experiments on the QMSum benchmark demonstrate that AdaFedSelecKD achieves performance comparable to powerful centralized training methods and exhibits strong generalizability and robustness.
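The abstract only sketches the approach at a high level. The snippet below is a minimal, hypothetical sketch of how an adapter-based client model with selective knowledge distillation might be organized; the class and function names (`Adapter`, `AdapterLayer`, `selective_kd_loss`), the global/local two-adapter reading, the confidence threshold, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an adapter-based summarizer with selective
# knowledge distillation. All names and design choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter inserted into a frozen pre-trained layer.
    Only these small matrices are trained and exchanged with the server,
    which keeps per-round communication cost low."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck transformation.
        return x + self.up(F.relu(self.down(x)))


class AdapterLayer(nn.Module):
    """Wraps a frozen transformer layer with a shared (global) and a
    client-specific (local) adapter -- one plausible reading of the
    paper's two cooperating adapters."""
    def __init__(self, frozen_layer: nn.Module, hidden_size: int):
        super().__init__()
        self.frozen_layer = frozen_layer
        self.global_adapter = Adapter(hidden_size)
        self.local_adapter = Adapter(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.frozen_layer(x)
        return self.local_adapter(self.global_adapter(h))


def selective_kd_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      threshold: float = 0.5) -> torch.Tensor:
    """Distill from the global (teacher) model only on tokens where the
    teacher is confident; elsewhere rely on plain cross-entropy against
    the client's own labels. Shapes: logits (batch, seq, vocab),
    labels (batch, seq)."""
    ce = F.cross_entropy(student_logits.transpose(1, 2), labels,
                         reduction="none")                      # (batch, seq)
    teacher_prob = F.softmax(teacher_logits / temperature, dim=-1)
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(student_logp, teacher_prob,
                  reduction="none").sum(-1)                     # (batch, seq)
    # Keep only tokens where the teacher's top probability clears the
    # threshold, so unreliable global knowledge is not distilled.
    keep = (teacher_prob.max(-1).values > threshold).float()
    return (ce + temperature ** 2 * keep * kd).mean()
```

Under these assumptions, each client would train only the adapter weights on its local domain data, send the global adapter to the server for aggregation, and use `selective_kd_loss` to absorb the aggregated (non-IID-trained) global knowledge only where it appears trustworthy.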

