On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models

03/12/2021
by Benjamin Zi Hao Zhao, et al.

With the rise of low-cost machine learning APIs, advanced machine learning models may be trained on private datasets and monetized by offering them as a service. However, privacy researchers have demonstrated that these models can leak information about records in the training dataset via membership inference attacks. In this paper, we take a closer look at another inference attack reported in the literature, called attribute inference, in which an attacker tries to infer missing attributes of a partially known record from the training dataset by accessing the machine learning model as an API. We show that even if a classification model succumbs to membership inference attacks, it is unlikely to be susceptible to attribute inference attacks. We demonstrate that this is because membership inference attacks fail to distinguish a member from a nearby non-member. We call the ability of an attacker to distinguish between two such (similar) vectors strong membership inference. We show that membership inference attacks cannot infer membership in this strong setting, and hence inferring attributes is infeasible. However, under a relaxed notion of attribute inference, called approximate attribute inference, we show that it is possible to infer attributes close to the true attributes. We verify our results on three publicly available datasets, five membership inference attacks, and three attribute inference attacks reported in the literature.
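To make the attack setting concrete, the following is a minimal sketch of a confidence-based attribute inference attack of the kind the abstract describes: the attacker holds a partial record, enumerates candidate values for the missing attribute, queries the model API, and keeps the candidate that maximizes the model's confidence on the record's known label. All names here are illustrative, and the toy model stands in for a black-box ML API; this is not the paper's exact procedure.

```python
def infer_attribute(predict_proba, partial_record, missing_idx,
                    candidate_values, true_label):
    """Guess a missing attribute by maximizing model confidence.

    predict_proba: black-box API returning class probabilities for a record.
    partial_record: the record with the missing attribute left as a placeholder.
    missing_idx: index of the unknown attribute.
    candidate_values: possible values of the unknown attribute.
    true_label: the record's known class label.
    """
    best_value, best_conf = None, -1.0
    for v in candidate_values:
        record = list(partial_record)
        record[missing_idx] = v          # try this candidate value
        conf = predict_proba(record)[true_label]
        if conf > best_conf:             # keep the most confident candidate
            best_value, best_conf = v, conf
    return best_value

# Toy stand-in for a deployed model: it is most confident in label 1
# when the third attribute equals 1 (purely for demonstration).
def toy_predict_proba(x):
    p1 = 0.9 if x[2] == 1 else 0.4
    return [1 - p1, p1]

guess = infer_attribute(toy_predict_proba, [0.5, 0.2, None], 2, [0, 1], 1)
# On this toy model, the attack recovers the value 1.
```

The paper's central point is that this style of attack succeeds only to the extent that the model's confidence separates the true record from nearby fabricated ones, i.e., only if strong membership inference is possible.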


Related research

- 08/28/2019 · On Inferring Training Data Attributes in Machine Learning Models
- 03/31/2022 · Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
- 06/02/2019 · Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning
- 05/16/2020 · DAMIA: Leveraging Domain Adaptation as a Defense against Membership Inference Attacks
- 04/24/2023 · Human intuition as a defense against attribute inference
- 10/17/2022 · Attribute Inference Attacks in Online Multiplayer Video Games: a Case Study on Dota2
- 02/04/2021 · ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
