Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving the rest

09/27/2019
by Indu Ilanchezian, et al.

In this paper we investigate the use of adversarial perturbations for privacy against both human perception and model (machine) based detection. We employ adversarial perturbations to obfuscate certain attributes in raw data while preserving the rest. Existing adversarial perturbation methods are typically used for data poisoning: minimal perturbations of the raw data degrade a machine learning model's performance while remaining imperceptible to human vision. We instead apply relatively maximal perturbations to the raw data to conditionally damage the model's classification of one attribute while preserving its performance on another attribute. In addition, the maximal nature of the perturbation also impairs human classification of the hidden attribute, not just model performance. We validate our results qualitatively, by showing the obfuscated dataset, and quantitatively, by showing that models trained on clean data cannot predict the hidden attribute from the perturbed dataset while still predicting the remaining attributes.
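
The approach can be viewed as a two-objective perturbation: ascend the loss of the attribute to be hidden while holding down the loss of the attribute to be preserved, under a deliberately large perturbation budget. The sketch below illustrates this idea; it is a minimal PGD-style rendering, not the authors' exact method, and the two-head model (hidden_head, kept_head), the weighting lam, and the step sizes are all illustrative assumptions.

import torch
import torch.nn.functional as F

# Minimal sketch (assumed PyTorch, not the paper's exact procedure):
# maximize the hidden-attribute loss while preserving the kept-attribute
# loss, under a deliberately large L_inf budget eps so the perturbation
# is also visible to humans.
def obfuscate(x, y_hidden, y_kept, hidden_head, kept_head,
              eps=0.5, alpha=0.05, steps=40, lam=1.0):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        loss_hide = F.cross_entropy(hidden_head(x_adv), y_hidden)  # damage this head
        loss_keep = F.cross_entropy(kept_head(x_adv), y_kept)      # protect this head
        loss = loss_hide - lam * loss_keep
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the combined loss
            delta.clamp_(-eps, eps)             # "maximal" but bounded perturbation
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

Under these assumptions, a model trained on clean data should misclassify the hidden attribute on the output of obfuscate while still recovering the preserved attribute, matching the quantitative validation described above.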
