Adversarial Attack with Pattern Replacement

11/25/2019
by Ziang Dong, et al.

We propose a generative model for adversarial attack. The model generates subtle but predictive patterns from the input. To perform an attack, it replaces the patterns of the input with patterns generated from examples of some other class. We demonstrate our model by attacking a CNN on MNIST.

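The abstract does not include an implementation, but the core idea, namely replacing an input's own predictive patterns with patterns generated from another class, can be illustrated with a minimal PyTorch sketch. The `PatternGenerator` architecture, the mask-based blending, and the loss weights below are illustrative assumptions of ours, not the authors' exact method.

```python
# Minimal sketch of a pattern-replacement attack on an MNIST CNN (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatternGenerator(nn.Module):
    """Encodes a (source, target-class) pair and emits a generated pattern plus a
    soft mask deciding where the source's own pattern is replaced."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Two output channels: generated pattern and replacement mask.
        self.decoder = nn.Conv2d(64, 2, 3, padding=1)

    def forward(self, x_src, x_tgt):
        h = self.encoder(torch.cat([x_src, x_tgt], dim=1))
        pattern, mask = self.decoder(h).chunk(2, dim=1)
        return torch.sigmoid(pattern), torch.sigmoid(mask)


def pattern_replacement_attack(generator, classifier, x_src, x_tgt, y_tgt):
    """Blend generated target-class patterns into the source image and return the
    adversarial example plus a training loss for the generator (assumed losses)."""
    pattern, mask = generator(x_src, x_tgt)
    x_adv = (1.0 - mask) * x_src + mask * pattern      # replace source patterns
    logits = classifier(x_adv)
    attack_loss = F.cross_entropy(logits, y_tgt)       # push prediction toward target class
    subtlety_loss = mask.mean()                        # keep the replacement subtle
    return x_adv, attack_loss + 0.1 * subtlety_loss
```

In such a setup, the generator would be trained against a fixed, pretrained MNIST classifier to minimize this loss, so that the blended images are misclassified as the target class while the replaced region stays small.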