SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning

05/20/2022
by Harsh Chaudhari, et al.

Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data. However, the datasets used for training ML models might be under the control of an adversary mounting a data poisoning attack, and MPC prevents inspecting training sets to detect poisoning. We show that multiple MPC frameworks for private ML training are susceptible to backdoor and targeted poisoning attacks. To mitigate this, we propose SafeNet, a framework for building ensemble models in MPC with formal guarantees of robustness to data poisoning attacks. We extend the security definition of private ML training to account for poisoning and prove that our SafeNet design satisfies the definition. We demonstrate SafeNet's efficiency, accuracy, and resilience to poisoning on several machine learning datasets and models. For instance, SafeNet reduces backdoor attack success from 100% to 0% for a neural network model, while achieving 39x faster training and 36x less communication than the four-party MPC framework of Dalskov et al.
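The ensemble idea behind SafeNet can be illustrated outside of MPC: each data owner trains a model on its own local dataset, and the ensemble predicts by majority vote, so a minority of owners with poisoned data cannot control the output. The sketch below is a minimal plaintext analogue of that idea, not the paper's MPC protocol; the owner count, the scikit-learn logistic-regression models, the synthetic dataset, and the label-flipping poison are all illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical plaintext sketch of an ensemble with majority voting.
    # SafeNet would run training and voting under MPC on secret-shared data;
    # here we only show why a poisoned minority cannot flip the ensemble.

    NUM_OWNERS = 5          # m data owners (illustrative value)
    POISONED = {0, 1}       # strictly fewer than half are adversarial

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Partition the training data among owners (stand-in for local datasets).
    parts = np.array_split(np.arange(len(X_train)), NUM_OWNERS)

    models = []
    for owner, idx in enumerate(parts):
        Xi, yi = X_train[idx], y_train[idx].copy()
        if owner in POISONED:
            yi = 1 - yi  # crude label-flipping poison on this owner's shard
        models.append(LogisticRegression(max_iter=1000).fit(Xi, yi))

    # Majority vote across local models; the poisoned minority is outvoted.
    votes = np.stack([m.predict(X_test) for m in models])
    ensemble_pred = (votes.sum(axis=0) > NUM_OWNERS // 2).astype(int)
    print("ensemble accuracy:", (ensemble_pred == y_test).mean())

Running the sketch, the two poisoned owners' models are consistently outvoted by the three clean ones, which is the intuition the paper's formal robustness guarantee makes precise for the MPC setting.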


