Backdoor Attacks on Federated Meta-Learning

06/12/2020
by Chien-Lun Chen, et al.

Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to backdoor attacks: a malicious user can alter the shared model so that it assigns an attacker-chosen label to specific inputs from a given class. In this paper, we analyze the effects of backdoor attacks in federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few training examples. While the ability to adapt could, in principle, make federated learning more robust to backdoor attacks when new training examples are benign, we find that even 1-shot poisoning attacks can be highly successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the cosine similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.
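To make the proposed defense concrete, here is a minimal PyTorch sketch of matching-networks-style classification under the split the abstract describes: the federation shares only a feature extractor, while each user keeps a private, labeled support set and predicts labels locally by cosine similarity. The function and tensor names below are illustrative assumptions, not code from the paper.

```python
# Minimal sketch (not the authors' code) of the matching-networks-style defense:
# classification is done locally from cosine similarity between the query's
# features and a private support set of labeled examples.
import torch
import torch.nn.functional as F

def classify_by_support(features, support_features, support_labels, num_classes):
    """Predict a class from cosine similarity to a labeled support set.

    features:          (d,) feature vector of the query input
    support_features:  (n, d) feature vectors of the support examples
    support_labels:    (n,) integer class labels of the support examples
    """
    # Cosine similarity between the query and every support example.
    sims = F.cosine_similarity(features.unsqueeze(0), support_features, dim=1)  # (n,)
    # Attention weights over the support set, as in matching networks.
    attn = F.softmax(sims, dim=0)                                               # (n,)
    # Accumulate attention per class and predict the highest-scoring class.
    one_hot = F.one_hot(support_labels, num_classes).float()                    # (n, C)
    class_scores = attn @ one_hot                                               # (C,)
    return class_scores.argmax().item()

# Toy usage: 5 support examples, 2 classes, 8-dimensional features.
feats = torch.randn(8)
sup_feats = torch.randn(5, 8)
sup_labels = torch.tensor([0, 0, 1, 1, 1])
print(classify_by_support(feats, sup_feats, sup_labels, num_classes=2))
```

Because the support set and this decision step never leave the user's device, a backdoor planted in the shared feature extractor cannot directly dictate the output label, which is how the defense curbs attack success and persistence.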

Related research

02/07/2022 · Preserving Privacy and Security in Federated Learning
Federated learning is known to be vulnerable to security and privacy iss...

11/26/2019 · Federated Learning for Ranking Browser History Suggestions
Federated Learning is a new subfield of machine learning that allows fit...

06/24/2022 · zPROBE: Zero Peek Robustness Checks for Federated Learning
Privacy-preserving federated learning allows multiple users to jointly t...

10/02/2019 · Privacy-preserving Federated Brain Tumour Segmentation
Due to medical data privacy regulations, it is often infeasible to colle...

11/08/2021 · BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination
In federated learning, each participant trains its local model with its ...

10/25/2021 · Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Federated learning has quickly gained popularity with its promises of in...

03/31/2020 · Inverting Gradients – How easy is it to break privacy in federated learning?
The idea of federated learning is to collaboratively train a neural netw...
