CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network

11/03/2018
by Xiaoyi Dong, et al.

In this paper, we propose an improved Adversarial Transformation Network (ATN) for generating adversarial examples that fool both white-box and black-box models with state-of-the-art performance; the method won 2nd place in the non-targeted attack track of CAAD 2018.
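The abstract does not spell out the paper's exact architecture or training objective, so the snippet below is only a rough sketch of the general ATN idea: a feed-forward generator is trained to emit a bounded perturbation that keeps the output close to the input while raising the target classifier's loss. The class `ATN`, the helper `train_step`, the epsilon bound, and the beta weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal ATN-style sketch in PyTorch; `target_model` is assumed to be a
# frozen, pretrained classifier on inputs in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class ATN(nn.Module):
    """Small conv net mapping a clean image to a bounded adversarial image."""
    def __init__(self, channels=3, eps=16 / 255):
        super().__init__()
        self.eps = eps
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Tanh output scaled by eps keeps the perturbation in an L_inf ball.
        return torch.clamp(x + self.eps * self.body(x), 0.0, 1.0)

def train_step(atn, target_model, x, y, optimizer, beta=0.1):
    """One non-targeted step: stay close to x, push the classifier away from y."""
    optimizer.zero_grad()
    x_adv = atn(x)
    logits = target_model(x_adv)
    adv_loss = -F.cross_entropy(logits, y)   # increase loss on the true label
    recon_loss = F.mse_loss(x_adv, x)        # keep the adversarial image close to x
    loss = adv_loss + beta * recon_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, the generator produces adversarial examples in a single forward pass, which is what makes transfer to black-box models feasible without querying them.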
