Federated Selective Aggregation for Knowledge Amalgamation

07/27/2022
by   Donglin Xie, et al.

In this paper, we explore a new knowledge-amalgamation problem, termed Federated Selective Aggregation (FedSA). The goal of FedSA is to train a student model for a new task with the help of several decentralized teachers whose pre-training tasks and data are different and agnostic. Our motivation for investigating such a problem setup stems from a recent dilemma of model sharing: many researchers and institutes have spent enormous resources on training large and competent networks, yet due to privacy, security, or intellectual-property issues they are unable to share their pre-trained models, even if they wish to contribute to the community. The proposed FedSA offers a solution to this dilemma and takes it one step further, since the learned student may specialize in a new task different from those of all the teachers. To this end, we propose a dedicated strategy for handling FedSA. Specifically, our student-training process is driven by a novel saliency-based approach that adaptively selects teachers as participants and integrates their representative capabilities into the student. To evaluate the effectiveness of FedSA, we conduct experiments in both single-task and multi-task settings. Experimental results demonstrate that FedSA effectively amalgamates knowledge from decentralized models and achieves performance competitive with centralized baselines.
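To make the selective-aggregation idea concrete, the following PyTorch sketch illustrates one *possible* reading of saliency-driven teacher selection. It is not the paper's actual algorithm: the saliency measure (input-gradient magnitude per teacher), the top-k selection, the shared label space across teachers, and the plain KL distillation loss are all simplifying assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def saliency_scores(teachers, x):
    """Score each teacher by the gradient magnitude of its output norm
    w.r.t. the input batch -- one plausible notion of 'saliency'
    (an assumption; the paper's measure may differ)."""
    scores = []
    for t in teachers:
        x_req = x.clone().requires_grad_(True)
        out = t(x_req)
        # Gradient of a scalar summary of the teacher output w.r.t. the input.
        g = torch.autograd.grad(out.norm(), x_req)[0]
        scores.append(g.abs().mean().item())
    return scores

def select_teachers(teachers, x, k=2):
    """Pick the k teachers with the highest saliency on this batch."""
    scores = saliency_scores(teachers, x)
    top = sorted(range(len(teachers)), key=lambda i: scores[i], reverse=True)[:k]
    return [teachers[i] for i in top]

def distill_step(student, teachers, x, optimizer, k=2, T=4.0):
    """One student update: distill from the adaptively selected teachers
    via temperature-scaled KL divergence (assumes a shared label space)."""
    picked = select_teachers(teachers, x, k)
    optimizer.zero_grad()
    s_logits = student(x)
    loss = 0.0
    for t in picked:
        with torch.no_grad():
            t_logits = t(x)
        loss = loss + F.kl_div(
            F.log_softmax(s_logits / T, dim=1),
            F.softmax(t_logits / T, dim=1),
            reduction="batchmean",
        ) * T * T
    loss = loss / len(picked)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that only unlabeled input batches and black-box teacher outputs are needed per step, which is consistent with the decentralized, data-agnostic setting the abstract describes; selection is recomputed per batch, so different teachers can dominate on different inputs.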

