DAMIA: Leveraging Domain Adaptation as a Defense against Membership Inference Attacks

by Hongwei Huang, et al.

Deep Learning (DL) techniques allow one to train models from a dataset to solve specific tasks. DL has attracted much interest given its strong performance and potential market value, but security issues remain among the most pressing concerns. In particular, DL models may be vulnerable to membership inference attacks, in which an attacker determines whether a given sample belongs to the training dataset. Efforts have been made to hinder such attacks, but unfortunately they may incur significant overhead or impair usability. In this paper, we propose and implement DAMIA, which leverages Domain Adaptation (DA) as a defense against membership inference attacks. Our observation is that, during training, DA obfuscates the dataset to be protected using another related dataset and derives a model that implicitly extracts features from both datasets. Since the protected dataset is obfuscated, membership inference fails, while the extracted features preserve usability. Extensive experiments validate our intuition: the model trained by DAMIA has a negligible impact on usability. Our experiments also rule out factors that may hinder DAMIA's performance, providing a guideline for vendors and researchers to benefit from our solution in a timely manner.
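The abstract does not specify which DA objective DAMIA uses, but a common way to align a protected source domain with a related target domain is to minimize a distribution-distance term such as Maximum Mean Discrepancy (MMD). The sketch below (an illustrative assumption, not the paper's actual implementation) shows why a *related* dataset is the right obfuscator: its features sit close to the source under the MMD metric, so aligning them costs little usability.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared Maximum Mean Discrepancy between samples X and Y
    under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances via the expansion
        # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (100, 8))       # features of the dataset to protect
tgt_near = rng.normal(0.1, 1.0, (100, 8))  # hypothetical related domain
tgt_far = rng.normal(3.0, 1.0, (100, 8))   # hypothetical unrelated domain

# A related domain yields a much smaller discrepancy, so a DA loss
# can align it with the protected data at little cost to accuracy.
print(mmd_rbf(src, tgt_near) < mmd_rbf(src, tgt_far))
```

Adding such an MMD term to the task loss pushes the feature extractor toward representations shared by both domains, which is the obfuscation effect the abstract describes: a membership inference adversary can no longer separate training members from the mixed feature distribution.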




