Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network

08/14/2019
by Jiangfan Han, et al.

Modern deep neural networks are often vulnerable to adversarial samples. Since the first optimization-based attack method, many subsequent methods have been proposed to improve attack performance and speed. Recently, generation-based methods have received much attention because they use feed-forward networks to generate adversarial samples directly, avoiding the time-consuming iterative procedure of optimization-based and gradient-based attacks. However, current generation-based methods can attack only one specific target (category) per model, which makes them impractical for real classification systems that often have hundreds or thousands of categories. In this paper, we propose the first Multi-target Adversarial Network (MAN), which can generate multi-target adversarial samples with a single model. By incorporating the specified category information into the intermediate features, MAN can attack any category of the target classification model at runtime. Experiments show that the proposed MAN produces stronger attacks and transfers better than previous state-of-the-art methods on both the multi-target and the single-target attack task. We further use the adversarial samples generated by MAN to improve the robustness of the classification model; the hardened model also achieves better classification accuracy than models trained with other defenses when attacked by various methods.
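
The core idea of the abstract, conditioning a single feed-forward generator on the desired target class, can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration, not the authors' released code: the names (`MANGenerator`, `num_classes`, `embed_dim`) and the simple encoder/decoder layout are assumptions. The paper describes injecting the specified category information into intermediate features, which is modeled here by adding a learned class embedding to the encoder's feature map.

```python
import torch
import torch.nn as nn

class MANGenerator(nn.Module):
    """Hypothetical sketch of a multi-target adversarial generator.

    A single network produces a perturbation for ANY requested target
    class by injecting a learned class embedding into its intermediate
    feature map (the conditioning idea described in the abstract).
    """

    def __init__(self, num_classes: int, embed_dim: int = 64):
        super().__init__()
        # Encoder: image -> intermediate features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, embed_dim, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # One learned conditioning vector per target category.
        self.class_embed = nn.Embedding(num_classes, embed_dim)
        # Decoder: conditioned features -> bounded perturbation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # perturbation in [-1, 1] before scaling
        )

    def forward(self, x: torch.Tensor, target: torch.Tensor,
                eps: float = 8 / 255) -> torch.Tensor:
        feat = self.encoder(x)
        # Broadcast the class embedding over spatial positions so the
        # intermediate features carry the specified category information.
        cond = self.class_embed(target)[:, :, None, None]
        delta = self.decoder(feat + cond)
        # Return an L_inf-bounded adversarial sample in [0, 1].
        return torch.clamp(x + eps * delta, 0.0, 1.0)


# Usage: one model, arbitrary target classes chosen at runtime.
gen = MANGenerator(num_classes=1000)
images = torch.rand(4, 3, 32, 32)
targets = torch.randint(0, 1000, (4,))
adv = gen(images, targets)
print(adv.shape)  # torch.Size([4, 3, 32, 32])
```

Training such a generator would presumably minimize the target classifier's cross-entropy loss toward the specified class (with the eps bound keeping the perturbation imperceptible), so the same weights learn to hit every category; the exact losses and architecture are in the full paper.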
