Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks

06/15/2019
by   Felipe A. Mejia, et al.

Adversarial training was introduced as a way to improve the robustness of deep learning models to adversarial attacks. While this training method improves robustness against adversarial attacks, it increases the model's vulnerability to privacy attacks. In this work we demonstrate how model inversion attacks, which extract training data directly from the model and were previously thought to be intractable, become feasible when attacking a robustly trained model. The input space of a traditionally trained model is dominated by adversarial examples, data points that strongly activate a certain class but lack semantic meaning, and this makes it difficult to successfully conduct model inversion attacks. We demonstrate this effect on the CIFAR-10 dataset under three different model inversion attacks: a vanilla gradient descent method, a gradient-based method at different scales, and a generative adversarial network based attack.
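The abstract's "vanilla gradient descent" attack can be illustrated with a minimal PyTorch sketch: starting from random noise, ascend the logit of a target class to reconstruct a class-representative input. This is a hypothetical illustration, not the paper's code; the `model` argument (any trained CIFAR-10 classifier with 10-way logits), the step count, and the learning rate are assumptions.

```python
# Hypothetical sketch of a vanilla gradient-descent model inversion attack.
# Assumes `model` is a trained CIFAR-10 classifier mapping a
# (N, 3, 32, 32) tensor in [0, 1] to (N, 10) class logits.
import torch


def invert_class(model, target_class, steps=1000, lr=0.1):
    """Reconstruct a representative input for `target_class` by
    gradient ascent on that class's logit, starting from noise."""
    model.eval()
    # CIFAR-10 shaped input, initialized to random noise.
    x = torch.randn(1, 3, 32, 32, requires_grad=True)
    optimizer = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target-class logit (minimize its negative).
        loss = -logits[0, target_class]
        loss.backward()
        optimizer.step()
        # Project back into the valid pixel range.
        with torch.no_grad():
            x.clamp_(0.0, 1.0)
    return x.detach()
```

Against a traditionally trained model, this optimization tends to terminate at an adversarial example rather than a recognizable image; the paper's point is that against a robustly trained model the same procedure recovers semantically meaningful, training-data-like inputs.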
