MediaEval 2019: Concealed FGSM Perturbations for Privacy Preservation

10/25/2019
by Panagiotis Linardos, et al.

This work tackles the Pixel Privacy task put forth by MediaEval 2019. Our goal is to manipulate images in a way that conceals them from automatic scene classifiers while preserving the original image quality. We use the fast gradient sign method, which normally has a corrupting influence on image appeal, and devise two methods to minimize the damage. The first approach uses a map of pixel locations that are either salient or flat, and directs perturbations away from them. The second approach subtracts the gradient of an aesthetics evaluation model from the gradient of the attack model to guide the perturbations towards a direction that preserves appeal. We make our code available at: https://git.io/JesXr.
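The two approaches can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the attack gradient, the aesthetics-model gradient, and the binary protection mask (marking salient or flat pixels) are taken as precomputed inputs, whereas in the paper they come from the scene classifier, the aesthetics evaluation model, and the saliency/flatness analysis respectively.

```python
import numpy as np

def fgsm_perturb(image, attack_grad, eps=0.03):
    """Standard FGSM: step in the sign of the attack gradient."""
    return np.clip(image + eps * np.sign(attack_grad), 0.0, 1.0)

def masked_fgsm(image, attack_grad, protect_mask, eps=0.03):
    """Method 1 (sketch): zero out perturbations at protected pixels.

    protect_mask is a boolean array marking salient or flat locations
    (hypothetical input; the paper derives it from the image itself).
    """
    delta = eps * np.sign(attack_grad)
    delta[protect_mask] = 0.0
    return np.clip(image + delta, 0.0, 1.0)

def aesthetics_guided_fgsm(image, attack_grad, aesthetics_grad, eps=0.03):
    """Method 2 (sketch): subtract the aesthetics-model gradient from the
    attack gradient before taking the sign, steering the perturbation
    toward directions that preserve image appeal."""
    return np.clip(image + eps * np.sign(attack_grad - aesthetics_grad), 0.0, 1.0)
```

In a real attack the gradients would be obtained by backpropagating the classifier's loss (and the aesthetics model's score) with respect to the input image; the sketch only shows how the two guidance signals modify the basic sign step.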
