PRAT: PRofiling Adversarial aTtacks

09/20/2023
by Rahul Ambati, et al.

The intrinsic susceptibility of deep learning to adversarial examples has led to a plethora of attack techniques that share the broad objective of fooling deep models. However, we find slight compositional differences between the algorithms achieving this objective. These differences leave traces that provide important clues for attacker profiling in real-life scenarios. Inspired by this, we introduce a novel problem of PRofiling Adversarial aTtacks (PRAT). Given an adversarial example, the objective of PRAT is to identify the attack used to generate it. Under this perspective, we can systematically group existing attacks into different families, leading to the sub-problem of attack family identification, which we also study. To enable PRAT analysis, we introduce a large Adversarial Identification Dataset (AID), comprising over 180k adversarial samples generated with 13 popular attacks under image-specific/agnostic and white-/black-box setups. We use AID to devise a novel framework for the PRAT objective. Our framework utilizes a Transformer-based Global-LOcal Feature (GLOF) module to extract an approximate signature of the adversarial attack, which in turn is used to identify the attack. Using AID and our framework, we provide multiple interesting benchmark results for the PRAT problem.
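
For intuition, the PRAT objective can be cast as supervised classification over adversarial images, where the label is the attack (or attack family) that produced each sample. The sketch below is a minimal, hypothetical PyTorch rendition of such a pipeline; the internal design of the global-local feature extractor (branch sizes, layer counts, fusion strategy) and all names here are assumptions for illustration, not the paper's exact GLOF architecture.

```python
import torch
import torch.nn as nn

class GLOFSketch(nn.Module):
    """Toy global-local feature extractor: a shallow CNN branch for local cues
    (e.g., high-frequency perturbation traces) and a Transformer encoder branch
    for global context, fused into an approximate attack "signature"."""

    def __init__(self, num_attacks=13, dim=128):
        super().__init__()
        # Local branch: shallow CNN over the adversarial image.
        self.local = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        # Global branch: Transformer encoder over the 8x8 grid of local tokens.
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.global_enc = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Classifier over the 13 attack classes (or attack families).
        self.head = nn.Linear(dim, num_attacks)

    def forward(self, x_adv):
        feats = self.local(x_adv)                        # (B, dim, 8, 8)
        tokens = feats.flatten(2).transpose(1, 2)        # (B, 64, dim)
        signature = self.global_enc(tokens).mean(dim=1)  # pooled attack signature
        return self.head(signature)                      # attack logits

# Usage: train with cross-entropy on (adversarial image, attack label) pairs
# drawn from a dataset such as AID.
model = GLOFSketch()
logits = model(torch.randn(4, 3, 224, 224))
pred_attack = logits.argmax(dim=1)  # predicted attack identity per sample
```

In this framing, the benchmark question studied in the paper reduces to how accurately such a signature separates the 13 attacks (or their families) on held-out adversarial samples.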

Related research

Adversarial Attacks on Gaussian Process Bandits (10/16/2021)
Gaussian processes (GP) are a widely-adopted tool used to sequentially o...

Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems (06/14/2020)
Adversarial attacks pose significant challenges for detecting adversaria...

Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems (12/02/2021)
Adversarial attacks, e.g., adversarial perturbations of the input and ad...

Orthogonal Deep Models As Defense Against Black-Box Attacks (06/26/2020)
Deep learning has demonstrated state-of-the-art performance for a variet...

Reversible Adversarial Examples with Beam Search Attack and Grayscale Invariance (06/20/2023)
Reversible adversarial examples (RAE) combine adversarial attacks and re...

Adversarial Attacks on Neural Models of Code via Code Difference Reduction (01/06/2023)
Deep learning has been widely used to solve various code-based tasks by ...

Suspicion-Free Adversarial Attacks on Clustering Algorithms (11/16/2019)
Clustering algorithms are used in a large number of applications and pla...
