Property Inference From Poisoning

01/26/2021
by Melissa Chase, et al.

Property inference attacks consider an adversary who has access to a trained model and tries to extract global statistics about the training data. In this work, we study property inference in scenarios where the adversary can maliciously control part of the training data (poisoning data) with the goal of increasing the leakage. Previous work on poisoning attacks has focused on decreasing the accuracy of models, either on the whole population or on specific sub-populations or instances. Here, for the first time, we study poisoning attacks where the adversary's goal is to increase the information leakage of the model. Our findings suggest that poisoning can boost information leakage significantly and should be considered a stronger threat model in sensitive applications where some data sources may be malicious. We describe our property inference poisoning attack, which allows the adversary to learn the prevalence in the training data of any property it chooses. We theoretically prove that our attack always succeeds as long as the learning algorithm has good generalization properties. We then verify the effectiveness of our attack by evaluating it experimentally on two datasets: a Census dataset and the Enron email dataset. We achieved above 90% attack accuracy with 9-10% poisoning in all of our experiments.
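The abstract describes the attack only at a high level. The toy sketch below (not the paper's actual construction) illustrates the core idea under illustrative assumptions: the adversary injects a small fraction of crafted points carrying the target property, then distinguishes two candidate prevalences from the model's black-box outputs on property-carrying queries. The synthetic data, the logistic-regression victim model, and helpers such as `make_dataset` and `poison` are all hypothetical; a real attack would calibrate its distinguishing test on separately trained shadow models rather than reusing the same simulation.

```python
# Toy, illustrative sketch of a poisoning-based property inference attack.
# Everything here (synthetic data, victim model, threshold test) is an
# assumption made for illustration, not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_dataset(n, prevalence):
    """Synthetic data: feature X[:, 0] is the sensitive 'property' bit."""
    prop = (rng.random(n) < prevalence).astype(float)
    other = rng.normal(size=(n, 4))
    X = np.column_stack([prop, other])
    y = (other[:, 0] + 0.5 * prop + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

def poison(X, y, frac):
    """Adversary injects frac*n points that carry the property and label 1."""
    n_poison = int(frac * len(y))
    Xp = np.column_stack([np.ones(n_poison), rng.normal(size=(n_poison, 4))])
    yp = np.ones(n_poison, dtype=int)
    return np.vstack([X, Xp]), np.concatenate([y, yp])

def victim_model(prevalence, poison_frac):
    """Victim trains on clean data with the given property prevalence plus poison."""
    X, y = make_dataset(2000, prevalence)
    X, y = poison(X, y, poison_frac)
    return LogisticRegression(max_iter=1000).fit(X, y)

def query_score(model):
    """Black-box statistic: average confidence on property-carrying queries."""
    Xq = np.column_stack([np.ones(200), rng.normal(size=(200, 4))])
    return model.predict_proba(Xq)[:, 1].mean()

# Attacker distinguishes prevalence t0=0.1 from t1=0.5 with 10% poisoning.
scores_t0 = [query_score(victim_model(0.1, 0.10)) for _ in range(20)]
scores_t1 = [query_score(victim_model(0.5, 0.10)) for _ in range(20)]

# Direction and threshold of the test are learned from the simulated runs.
sign = 1 if np.mean(scores_t1) > np.mean(scores_t0) else -1
threshold = (np.mean(scores_t0) + np.mean(scores_t1)) / 2
guess_t1 = lambda s: int(sign * (s - threshold) > 0)

acc = (np.mean([1 - guess_t1(s) for s in scores_t0])
       + np.mean([guess_t1(s) for s in scores_t1])) / 2
print(f"distinguishing accuracy: {acc:.2f}")
```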

