Fine-grained Synthesis of Unrestricted Adversarial Examples

11/20/2019
by Omid Poursaeed, et al.

We propose a novel approach for generating unrestricted adversarial examples by manipulating fine-grained aspects of image generation. Unlike existing unrestricted attacks, which typically hand-craft geometric transformations, we learn stylistic and stochastic modifications by leveraging state-of-the-art generative models. This allows us to manipulate an image in a controlled, fine-grained manner without being bounded by a norm threshold. Our model can be used for both targeted and non-targeted unrestricted attacks. We demonstrate that our attacks can bypass certified defenses, yet our adversarial images are indistinguishable from natural images, as verified by human evaluation. Adversarial training can be used as an effective defense without degrading the model's performance on clean images. We perform experiments on LSUN and CelebA-HQ as high-resolution datasets to validate the efficacy of our proposed approach.
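The core idea of the abstract, learning a modification in a generator's latent space that induces misclassification rather than adding a norm-bounded pixel perturbation, can be sketched as follows. This is not the authors' code: the tiny generator `G`, classifier `f`, latent dimension, and image size are all hypothetical stand-ins (the paper uses StyleGAN-style generators on high-resolution LSUN and CelebA-HQ images), and the sketch optimizes a single latent offset rather than a learned modification network.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: in the paper, G would be a StyleGAN-like generator
# and f a pretrained classifier; here both are tiny toy networks so the
# sketch is self-contained and runs anywhere.
torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3 * 4 * 4))
f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 4 * 4, 10))

z = torch.randn(1, 8)                            # style/stochastic latent of one image
target = torch.tensor([7])                       # targeted attack: desired (wrong) label
delta = torch.zeros_like(z, requires_grad=True)  # latent-space modification being learned
opt = torch.optim.Adam([delta], lr=0.05)

def targeted_loss():
    # Synthesize the image from the shifted latent, then score it with the
    # classifier; low cross-entropy to `target` means the attack succeeds.
    x_adv = G(z + delta).view(1, 3, 4, 4)
    return nn.functional.cross_entropy(f(x_adv), target)

initial_loss = targeted_loss().item()
for _ in range(200):
    # Note there is no norm bound on delta: the attack is "unrestricted",
    # constrained only by staying on the generator's image manifold.
    opt.zero_grad()
    loss = targeted_loss()
    loss.backward()
    opt.step()
final_loss = targeted_loss().item()
```

In the paper's setting, the modification would additionally be kept small and semantically plausible (so the output still looks like a natural image to humans), e.g. by restricting changes to style and noise inputs of the generator; the toy loop above only shows the optimization-in-latent-space principle.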


