Adversarial Attacks with Time-Scale Representations

07/26/2021
by Alberto Santamaria-Pang, et al.

We propose a novel framework for real-time black-box universal attacks that disrupt activations of early convolutional layers in deep learning models. Our hypothesis is that perturbations produced in the wavelet space disrupt early convolutional layers more effectively than perturbations performed in the time domain. The main challenge in adversarial attacks is to preserve low-frequency image content while minimally changing the most meaningful high-frequency content. To address this, we formulate an optimization problem using time-scale (wavelet) representations as a dual space, in three steps. First, we project original images into orthonormal sub-spaces for low and high scales via wavelet coefficients. Second, we perturb the wavelet coefficients of the high-scale projection using a generator network. Third, we generate new adversarial images by projecting back the original coefficients from the low-scale sub-space and the perturbed coefficients from the high-scale sub-space. We provide a theoretical framework that guarantees a dual mapping between time and time-scale domain representations. We compare our results with state-of-the-art black-box attacks from generative-based and gradient-based models. We also verify efficacy against multiple defense methods such as JPEG compression, Guided Denoiser, and ComDefend. Our results show that wavelet-based perturbations consistently outperform time-based attacks, providing new insights into the vulnerabilities of deep learning models, and could potentially lead to robust architectures or new defense and attack mechanisms that leverage time-scale representations.
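The three projection steps above can be sketched with a 2-D discrete wavelet transform. This is a minimal illustration using the PyWavelets library: `wavelet_attack` and `perturb_fn` are illustrative names not taken from the paper, and the paper's generator network is replaced here by a generic perturbation function applied to the high-scale (detail) coefficients only.

```python
import numpy as np
import pywt  # PyWavelets; assumed available


def wavelet_attack(image, perturb_fn, wavelet="haar"):
    # Step 1: project the image into low- and high-scale sub-spaces
    # via a single-level 2-D DWT (orthonormal for the Haar wavelet).
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # Step 2: perturb only the high-scale (detail) coefficients.
    # In the paper this perturbation comes from a generator network;
    # perturb_fn is a stand-in for that component.
    cH_p, cV_p, cD_p = perturb_fn(cH), perturb_fn(cV), perturb_fn(cD)
    # Step 3: reconstruct the adversarial image from the untouched
    # low-scale coefficients and the perturbed high-scale ones.
    return pywt.idwt2((cA, (cH_p, cV_p, cD_p)), wavelet)


# Toy usage: small additive noise on the detail coefficients only,
# leaving the low-frequency image content unchanged.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
adv = wavelet_attack(img, lambda c: c + 0.01 * rng.standard_normal(c.shape))
```

Because the low-scale approximation coefficients pass through untouched, the perturbation budget is spent entirely on high-frequency content, which matches the stated design goal of the attack.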

Related research

11/03/2022  Data-free Defense of Black Box Models Against Adversarial Attacks
07/28/2021  Detecting AutoAttack Perturbations in the Frequency Domain
09/24/2018  Low Frequency Adversarial Perturbation
06/26/2020  Orthogonal Deep Models As Defense Against Black-Box Attacks
01/12/2022  Adversarially Robust Classification by Conditional Generative Model Inversion
02/21/2021  A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization
03/28/2018  Defending against Adversarial Images using Basis Functions Transformations
