Adversarial Images through Stega Glasses

10/15/2020
by Benoît Bonnet, et al.

This paper explores the connection between steganography and adversarial images. On the one hand, steganalysis helps in detecting adversarial perturbations. On the other hand, steganography helps in forging adversarial perturbations that are not only invisible to the human eye but also statistically undetectable. This work explains how to use these information-hiding tools for attacking or defending computer vision image classification. We play this cat-and-mouse game with state-of-the-art classifiers, steganalyzers, and steganographic embedding schemes. It turns out that steganography helps the attacker more than the defender.
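
As a rough sketch of the attacker's side described above, the snippet below biases a one-step gradient attack (FGSM) with a crude HILL/WOW-style texture cost, so the perturbation concentrates in textured regions where steganalyzers are least sensitive. The cost map, step size, and Laplacian texture proxy are illustrative assumptions for this sketch, not the authors' embedding scheme.

import torch
import torch.nn.functional as F

def stego_cost_map(x: torch.Tensor) -> torch.Tensor:
    """Illustrative texture-based cost: smooth regions get a high cost
    (changes there are easy for a steganalyzer to spot), textured regions
    a low cost. Loosely inspired by HILL/WOW-style costs; an assumption,
    not the paper's cost function."""
    # Depthwise high-pass residual (Laplacian) as a texture proxy.
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]],
                          device=x.device, dtype=x.dtype).view(1, 1, 3, 3)
    kernel = kernel.repeat(x.size(1), 1, 1, 1)
    residual = F.conv2d(x, kernel, padding=1, groups=x.size(1)).abs()
    return 1.0 / (residual + 1e-3)   # smooth area -> large cost

def cost_aware_fgsm(model, x, y, eps=4 / 255):
    """One FGSM step whose per-pixel budget is scaled down where the
    steganographic cost is high, pushing changes into texture."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    cost = stego_cost_map(x.detach())
    # Normalize so the cheapest pixel keeps the full budget; weight in (0, 1].
    weight = cost.amin(dim=(2, 3), keepdim=True) / cost
    x_adv = x + eps * weight * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Usage (hypothetical classifier and batch of images in [0, 1]):
#   x_adv = cost_aware_fgsm(model, images, labels)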
