Scalable Attribution of Adversarial Attacks via Multi-Task Learning

02/25/2023
by Zhongyi Guo, et al.

Deep neural networks (DNNs) can be easily fooled during the inference phase by adversarial attacks, in which attackers add imperceptible perturbations to original examples to produce adversarial examples. Many works focus on adversarial detection and adversarial training to defend against such attacks. However, few works explore the tool chains behind adversarial examples, which can help defenders seize clues about the originator of an attack and its goals, and provide insight into the most effective defense algorithm against the corresponding attack. Given this gap, it is necessary to develop techniques that can recognize the tool chains leveraged to generate adversarial examples, a task we call the Adversarial Attribution Problem (AAP). In this paper, AAP is defined as the recognition of three signatures: attack algorithm, victim model, and hyperparameter. Current works cast AAP as a single-label classification task and ignore the relationships among these signatures. The former choice runs into a combinatorial explosion as the number of signatures grows; the latter means AAP cannot be treated as a single-task problem. We first conduct experiments to validate the attributability of adversarial examples. We then propose a multi-task learning framework named Multi-Task Adversarial Attribution (MTAA) to recognize the three signatures simultaneously. MTAA contains a perturbation extraction module, an adversarial-only feature extraction module, and a classification-and-regression module. It takes into account the relationship between an attack algorithm and its hyperparameter, and uses an uncertainty-weighted loss to adjust the weights of the three recognition tasks. Experimental results on MNIST and ImageNet show the feasibility and scalability of the proposed framework, as well as its effectiveness in dealing with false alarms.
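The abstract does not spell out MTAA's exact architecture or loss. As a rough, hypothetical sketch of the uncertainty-weighted multi-task idea it describes (three heads over shared features: attack-algorithm classification, victim-model classification, and hyperparameter regression, with learned log-variances weighting the per-task losses in the style of Kendall et al. 2018), here is a minimal PyTorch example. All module names, dimensions, and class counts are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MTAAHead(nn.Module):
    """Hypothetical multi-task attribution head: shared features feed three
    tasks -- attack-algorithm classification, victim-model classification,
    and hyperparameter regression."""

    def __init__(self, feat_dim=512, n_attacks=5, n_victims=4):
        super().__init__()
        self.attack_cls = nn.Linear(feat_dim, n_attacks)
        self.victim_cls = nn.Linear(feat_dim, n_victims)
        self.hparam_reg = nn.Linear(feat_dim, 1)
        # One learnable log-variance per task for uncertainty weighting.
        self.log_vars = nn.Parameter(torch.zeros(3))

    def forward(self, feats):
        return self.attack_cls(feats), self.victim_cls(feats), self.hparam_reg(feats)

    def loss(self, outputs, targets):
        attack_logits, victim_logits, hparam_pred = outputs
        attack_y, victim_y, hparam_y = targets
        losses = torch.stack([
            nn.functional.cross_entropy(attack_logits, attack_y),
            nn.functional.cross_entropy(victim_logits, victim_y),
            nn.functional.mse_loss(hparam_pred.squeeze(-1), hparam_y),
        ])
        # Each task loss is scaled by a learned precision exp(-log_var),
        # with log_var itself as a regularizer, so task weights are learned
        # rather than hand-tuned.
        precisions = torch.exp(-self.log_vars)
        return (precisions * losses + self.log_vars).sum()

# Usage sketch: random features standing in for a backbone's output.
head = MTAAHead()
feats = torch.randn(8, 512)
targets = (torch.randint(0, 5, (8,)),   # attack-algorithm labels
           torch.randint(0, 4, (8,)),   # victim-model labels
           torch.rand(8))               # normalized hyperparameter values
loss = head.loss(head(feats), targets)
loss.backward()
```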

Related research

- 11/19/2020: Multi-Task Adversarial Attack. "Deep neural networks have achieved impressive performance in various are..."
- 08/20/2021: AdvDrop: Adversarial Attack to DNNs by Dropping Information. "Human can easily recognize visual objects with lost information: even lo..."
- 12/20/2022: Multi-head Uncertainty Inference for Adversarial Attack Detection. "Deep neural networks (DNNs) are sensitive and susceptible to tiny pertur..."
- 01/31/2023: Reverse engineering adversarial attacks with fingerprints from adversarial examples. "In spite of intense research efforts, deep neural networks remain vulner..."
- 04/21/2021: MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training. "This paper describes MagicPai's system for SemEval 2021 Task 7, HaHackat..."
- 05/20/2023: Dynamic Gradient Balancing for Enhanced Adversarial Attacks on Multi-Task Models. "Multi-task learning (MTL) creates a single machine learning model called..."
- 04/02/2022: SkeleVision: Towards Adversarial Resiliency of Person Tracking with Multi-Task Learning. "Person tracking using computer vision techniques has wide ranging applic..."
