Multi-Task Adversarial Attack

11/19/2020
by Pengxin Guo, et al.

Deep neural networks have achieved impressive performance in various areas, but they are shown to be vulnerable to adversarial attacks. Previous works on adversarial attacks mainly focused on the single-task setting. However, in real applications, it is often desirable to attack several models for different tasks simultaneously. To this end, we propose Multi-Task adversarial Attack (MTA), a unified framework that can craft adversarial examples for multiple tasks efficiently by leveraging shared knowledge among tasks, which helps enable large-scale applications of adversarial attacks on real-world systems. More specifically, MTA uses a generator for adversarial perturbations which consists of a shared encoder for all tasks and multiple task-specific decoders. Thanks to the shared encoder, MTA reduces the storage cost and speeds up the inference when attacking multiple tasks simultaneously. Moreover, the proposed framework can be used to generate per-instance and universal perturbations for targeted and non-targeted attacks. Experimental results on the Office-31 and NYUv2 datasets demonstrate that MTA can improve the quality of attacks when compared with its single-task counterpart.
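The abstract describes a perturbation generator built from one encoder shared across tasks plus one decoder per task. The paper itself does not publish this code, so the following is only a minimal PyTorch sketch of that architectural idea; all layer sizes, the `eps` perturbation bound, and the class/method names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskPerturbationGenerator(nn.Module):
    """Hypothetical sketch: a shared encoder feeds task-specific decoders,
    each emitting a bounded adversarial perturbation for its task."""

    def __init__(self, num_tasks: int, channels: int = 3,
                 hidden: int = 16, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps  # L-infinity bound on the perturbation (assumed value)
        # Encoder parameters are shared by all tasks, which is what
        # saves storage and inference time when attacking several tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # One lightweight decoder head per task.
        self.decoders = nn.ModuleList(
            nn.Conv2d(hidden, channels, 3, padding=1) for _ in range(num_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        z = self.encoder(x)  # shared representation
        # tanh keeps the raw output in (-1, 1); scaling by eps bounds it.
        delta = torch.tanh(self.decoders[task_id](z)) * self.eps
        # Clamp so the adversarial example stays a valid image in [0, 1].
        return (x + delta).clamp(0.0, 1.0)

gen = MultiTaskPerturbationGenerator(num_tasks=2)
x = torch.rand(4, 3, 32, 32)          # a batch of dummy images
x_adv = gen(x, task_id=0)             # attack crafted for task 0
```

In this per-instance setting the perturbation depends on the input `x`; a universal variant, as mentioned in the abstract, would instead learn a single perturbation applied to every input of a task.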


