
Robust Federated Training via Collaborative Machine Teaching using Trusted Instances

by Yufei Han et al.

Federated learning performs distributed model training over local data hosted by agents, sharing only model parameter updates for iterative aggregation at the server. Although privacy-preserving by design, federated learning is vulnerable to noise corruption at local agents, as demonstrated by previous studies of adversarial data-poisoning threats against federated learning systems: even a single noise-corrupted agent can bias the model training. In this work, we propose a collaborative and privacy-preserving machine teaching paradigm with multiple distributed teachers to improve the robustness of the federated training process against local data corruption. We assume that each local agent (teacher) has the resources to verify a small portion of trusted instances, which may not by itself be adequate for learning. In the proposed collaborative machine teaching method, these trusted instances guide the distributed agents to jointly select a compact yet informative training subset from their own local data. Simultaneously, the agents learn to apply changes of limited magnitude to the selected instances, so as to improve the test performance of the federally trained model despite the corruption of the training data. Experiments on toy and real data demonstrate that our approach identifies training-set bugs effectively and suggests appropriate label changes. Our algorithm is a step toward trustworthy machine learning.
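The core idea — each agent uses a tiny set of trusted instances to vet its own (possibly corrupted) local data, repairs disputed labels, and shares only model parameters with a server that averages them — can be sketched in a minimal toy form. The sketch below is an illustrative assumption, not the paper's actual algorithm: it uses a nearest-class-mean classifier on 1-D data and a simple "relabel when the trusted-instance model disagrees" rule in place of the paper's optimization-based subset selection and bounded perturbations.

```python
import random

random.seed(0)

def class_means(data):
    """Fit a nearest-class-mean model: data is a list of (x, y), y in {0, 1}."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def predict(means, x):
    return 0 if abs(x - means[0]) <= abs(x - means[1]) else 1

def teach(local, trusted):
    """Use trusted instances to flag and repair suspected label bugs locally."""
    means = class_means(trusted)
    cleaned = []
    for x, y in local:
        yhat = predict(means, x)
        # Relabel only where the trusted-instance model disputes the label.
        cleaned.append((x, yhat) if yhat != y else (x, y))
    return cleaned

def make_agent(n_clean=20, n_flipped=5):
    """Class 0 near x=0, class 1 near x=5, plus a few label-flipped points."""
    pts = [(random.gauss(0, 0.5), 0) for _ in range(n_clean)]
    pts += [(random.gauss(5, 0.5), 1) for _ in range(n_clean)]
    pts += [(random.gauss(0, 0.5), 1) for _ in range(n_flipped)]  # corrupted
    trusted = [(0.0, 0), (5.0, 1)]  # tiny verified set, too small to train on alone
    return pts, trusted

agents = [make_agent() for _ in range(3)]

# Each agent cleans its data with its trusted instances, fits locally,
# and shares only the model parameters (never the raw data).
local_models = [class_means(teach(local, trusted)) for local, trusted in agents]

# Server aggregates by averaging parameters across agents (FedAvg-style).
global_means = {c: sum(m[c] for m in local_models) / len(local_models)
                for c in (0, 1)}

print(predict(global_means, 0.2), predict(global_means, 4.8))  # → 0 1
```

Despite each agent holding flipped labels, the trusted-instance check repairs them before local fitting, so the averaged global model still separates the two classes correctly.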




PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments

We propose and implement a Privacy-preserving Federated Learning (PPFL) ...

Training Set Debugging Using Trusted Items

Training set bugs are flaws in the data that adversely affect machine le...

Collaborative and Privacy-Preserving Machine Teaching via Consensus Optimization

In this work, we define a collaborative and privacy-preserving machine t...

Analyzing Federated Learning through an Adversarial Lens

Federated learning distributes model training among a multitude of agent...

Reaching Data Confidentiality and Model Accountability on the CalTrain

Distributed collaborative learning (DCL) paradigms enable building joint...

Privacy Preserving Stochastic Channel-Based Federated Learning with Neural Network Pruning

Artificial neural network has achieved unprecedented success in a wide v...

Simeon – Secure Federated Machine Learning Through Iterative Filtering

Federated learning enables a global machine learning model to be trained...