Holistic risk assessment of inference attacks in machine learning

12/15/2022
by Yang Yang, et al.

As machine learning finds ever wider application, privacy and safety issues have become impossible to ignore. In particular, inference attacks against machine learning models allow adversaries to infer sensitive information about the target model, such as its training data or model parameters. Inference attacks can lead to serious consequences, including violating individuals' privacy and compromising the intellectual property of the model's owner. To date, researchers have studied several types of inference attacks in depth, albeit in isolation; what is still missing is a holistic risk assessment of inference attacks against machine learning models, covering their behavior in different scenarios, the common factors affecting attack performance, and the relationships among the attacks. Accordingly, this paper performs a holistic risk assessment of different inference attacks against machine learning models. It focuses on three representative attacks: membership inference, attribute inference, and model stealing, and establishes a threat-model taxonomy. A total of 12 target models using three model architectures, namely AlexNet, ResNet18, and a simple CNN, are trained on four datasets: CelebA, UTKFace, STL10, and FMNIST.
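The paper itself does not specify its attack implementations in this abstract, but the simplest membership inference signal it alludes to is overfitting: a model tends to be more confident on examples it was trained on. A minimal confidence-threshold sketch, using entirely synthetic confidence scores (all numbers below are illustrative assumptions, not results from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic top-class confidences: training-set members tend to receive
# higher confidence than non-members when the target model overfits.
member_conf = rng.normal(loc=0.9, scale=0.05, size=1000).clip(0, 1)
nonmember_conf = rng.normal(loc=0.7, scale=0.15, size=1000).clip(0, 1)

def infer_membership(confidences, threshold=0.8):
    """Predict 'member' whenever the model's confidence exceeds the threshold."""
    return confidences > threshold

tpr = infer_membership(member_conf).mean()      # fraction of members caught
fpr = infer_membership(nonmember_conf).mean()   # fraction of non-members misflagged
accuracy = 0.5 * (tpr + (1 - fpr))              # balanced attack accuracy
print(f"balanced attack accuracy: {accuracy:.2f}")
```

A balanced accuracy above 0.5 means the attacker distinguishes members from non-members better than random guessing; more sophisticated variants (e.g., shadow-model attacks) learn the decision rule instead of fixing a threshold.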


