WAFFLe: Weight Anonymized Factorization for Federated Learning

08/13/2020
by Weituo Hao, et al.

In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. In light of this need, federated learning has emerged as a popular training paradigm. However, many federated learning approaches trade transmitting data for communicating updated weight parameters from each local device. As a result, a successful breach that would otherwise have directly compromised the data instead grants whitebox access to the local model, which opens the door to a number of attacks, including exposing the very data federated learning seeks to protect. Additionally, in distributed scenarios, individual client devices commonly exhibit high statistical heterogeneity. Many common federated approaches learn a single global model; while this may do well on average, performance degrades when the i.i.d. assumption is violated, underfitting individuals farther from the mean and raising questions of fairness. To address these issues, we propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks. Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe's significant improvements in local test performance and fairness while simultaneously providing an extra layer of security.
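To make the core idea concrete, the sketch below illustrates (not the paper's exact algorithm) how a per-client weight matrix can be composed from a shared dictionary of weight factors, with factor assignments drawn via the Indian Buffet Process. All function names (`sample_ibp`, `client_weight`) and shapes are illustrative assumptions, not WAFFLe's actual implementation.

```python
import numpy as np

def sample_ibp(num_clients, alpha, rng):
    """Sample a binary factor-assignment matrix Z via the Indian Buffet Process.

    Illustrative sketch: client t reuses an existing factor k with probability
    m_k / (t + 1), where m_k counts prior clients using factor k, then draws
    Poisson(alpha / (t + 1)) brand-new factors.
    """
    rows = []
    dish_counts = []  # m_k: number of clients using factor k so far
    for t in range(num_clients):
        row = []
        for k, m in enumerate(dish_counts):
            take = rng.random() < m / (t + 1)
            row.append(1 if take else 0)
            if take:
                dish_counts[k] += 1
        new = rng.poisson(alpha / (t + 1))  # brand-new factors for this client
        row.extend([1] * new)
        dish_counts.extend([1] * new)
        rows.append(row)
    K = len(dish_counts)
    Z = np.zeros((num_clients, K), dtype=int)
    for t, row in enumerate(rows):
        Z[t, :len(row)] = row
    return Z

def client_weight(z_t, factors):
    """Compose one client's layer weight as a masked sum of shared factors.

    z_t: (K,) binary mask for this client; factors: (K, d_out, d_in) shared
    dictionary. Only the factor indices, not raw weights, identify the client.
    """
    return np.tensordot(z_t, factors, axes=1)  # -> (d_out, d_in)

rng = np.random.default_rng(0)
Z = sample_ibp(num_clients=5, alpha=2.0, rng=rng)
K = Z.shape[1]
factors = rng.normal(size=(K, 4, 3))   # hypothetical shared dictionary
W0 = client_weight(Z[0], factors)       # client 0's composed layer weight
```

Because each client only holds a sparse subset of a shared dictionary, an attacker who breaches one device observes a mixture of shared factors rather than weights trained solely on that client's data, which is the intuition behind the extra layer of security described above.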

