ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

02/04/2021
by Yugeng Liu, et al.

Inference attacks against Machine Learning (ML) models allow adversaries to learn sensitive information about training data, model parameters, etc. While researchers have studied these attacks thoroughly, they have done so in isolation. As a result, we lack a comprehensive picture of the risks caused by these attacks, such as the different scenarios they can be applied in, the common factors that influence their performance, the relationships among them, and the effectiveness of defense techniques. In this paper, we fill this gap by presenting a first-of-its-kind holistic risk assessment of different inference attacks against machine learning models. We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing - and establish a threat model taxonomy. Our extensive experimental evaluation, conducted over five model architectures and four datasets, shows that the complexity of the training dataset plays an important role in attack performance, and that the effectiveness of model stealing attacks and that of membership inference attacks are negatively correlated. We also show that defenses like DP-SGD and Knowledge Distillation can only mitigate some of the inference attacks. Our analysis relies on a modular, reusable software framework, ML-Doctor, which enables ML model owners to assess the risks of deploying their models, and equally serves as a benchmark tool for researchers and practitioners.
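To make the attack surface more concrete, below is a minimal sketch of the simplest membership inference variant, a confidence-threshold test, written in PyTorch. It is illustrative only: the model, inputs, and the 0.9 threshold are hypothetical placeholders, not ML-Doctor's actual implementation, and practical attacks typically calibrate the decision rule rather than fixing it by hand.

import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_membership_guess(target_model, x, threshold=0.9):
    """Guess "member" when the target model is highly confident on x.

    Overfitted models tend to assign higher confidence to training
    members than to unseen points; this gap is the signal that
    membership inference attacks exploit.
    """
    target_model.eval()
    logits = target_model(x)                            # (batch, classes)
    confidence = F.softmax(logits, dim=1).max(dim=1).values
    return confidence >= threshold                      # True -> "member"

if __name__ == "__main__":
    # Toy demo: an untrained linear "target model" and random inputs.
    # A real attack would query a trained model with candidate records.
    model = torch.nn.Linear(16, 10)
    x = torch.randn(4, 16)
    print(confidence_membership_guess(model, x))

A standard refinement in the membership inference literature is to replace the fixed threshold with an attack classifier trained on the outputs of shadow models, i.e., models trained on data similar to the target's training distribution.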


