Backdoor Attacks on Federated Meta-Learning

by Chien-Lun Chen et al.

Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to backdoor attacks: a malicious user can alter the shared model so that specific inputs are misclassified into an attacker-chosen class. In this paper, we analyze the effects of backdoor attacks in federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few training examples. While the ability to adapt could, in principle, make federated learning more robust to backdoor attacks when new training examples are benign, we find that even 1-shot poisoning attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the cosine similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.
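The proposed defense rests on the matching-network decision rule: instead of a learned classification head, the predicted class comes from cosine similarity between an input's features and a locally held support set of labeled examples. The sketch below illustrates that rule in NumPy under stated assumptions; the function name, the softmax attention over similarities, and the upstream feature extraction (e.g. a shared embedding network) are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def matching_network_predict(query_feat, support_feats, support_labels):
    """Predict a query's class from cosine similarity to a labeled
    support set, in the style of matching networks. Feature extraction
    is assumed to happen upstream in a shared embedding model."""
    # Normalize so dot products become cosine similarities.
    q = query_feat / np.linalg.norm(query_feat)
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    sims = s @ q  # cosine similarity to each support example

    # Soft attention over the support labels (one-hot), then argmax.
    weights = np.exp(sims) / np.exp(sims).sum()
    n_classes = int(support_labels.max()) + 1
    onehot = np.eye(n_classes)[support_labels]
    class_scores = weights @ onehot
    return int(np.argmax(class_scores))

# Toy usage: a 2-class support set with orthogonal features.
support_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
support_labels = np.array([0, 1])
pred = matching_network_predict(np.array([0.9, 0.1]),
                                support_feats, support_labels)
```

Because the support set and the similarity-based decision stay on the user's device, a poisoned shared model has no classification layer to backdoor directly, which is the intuition behind the reduced attack success and persistence reported above.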



