Comparative Evaluation of Recent Universal Adversarial Perturbations in Image Classification

06/20/2023
by Juanjuan Weng, et al.

The vulnerability of Convolutional Neural Networks (CNNs) to adversarial samples has recently garnered significant attention in the machine learning community. Moreover, recent studies have shown the existence of universal adversarial perturbations (UAPs): image-agnostic perturbations that transfer well across different CNN models. This survey focuses on recent advances in UAPs for the image classification task. We divide UAPs into two categories, noise-based attacks and generator-based attacks, and provide a comprehensive overview of representative methods in each category. By presenting the computational details of these methods, we summarize the various loss functions employed for learning UAPs. We then evaluate the different loss functions under consistent training frameworks, both noise-based and generator-based. The evaluation covers a wide range of attack settings, including black-box and white-box attacks, targeted and untargeted attacks, and attacks against defense mechanisms. Our quantitative results yield several findings on the effectiveness of different loss functions, the choice of surrogate CNN model, the impact of training data and its size, and the training framework used to craft universal attackers. Finally, to promote future research on universal adversarial attacks, we visualize the learned perturbations and discuss potential research directions.
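To make the noise-based category concrete, the following is a minimal sketch (not the paper's actual method) of how a single image-agnostic perturbation can be learned: one shared delta is updated by sign-gradient ascent on the classification loss across the whole dataset and projected onto an L-infinity ball after every step. A toy linear classifier stands in for the surrogate CNN; all names (`learn_uap`, `softmax_xent_grad`) and hyperparameters are illustrative assumptions.

```python
import numpy as np

def softmax_xent_grad(W, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x for a
    linear classifier with logits W @ x and true label y."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y] -= 1.0                      # softmax(logits) - onehot(y)
    return W.T @ p

def learn_uap(images, labels, W, eps=0.5, lr=0.05, epochs=5):
    """Noise-based UAP sketch: learn one shared perturbation delta by
    sign-gradient *ascent* on the loss, projecting onto the
    L-infinity ball of radius eps after each update."""
    delta = np.zeros(images.shape[1])
    for _ in range(epochs):
        for x, y in zip(images, labels):
            g = softmax_xent_grad(W, x + delta, y)
            delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return delta

# Toy setup: a random linear "model" whose own predictions define the
# labels, so clean accuracy is 100% by construction.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 20))
images = rng.standard_normal((50, 20))
labels = (W @ images.T).argmax(axis=0)

delta = learn_uap(images, labels, W)
fooled = ((W @ (images + delta).T).argmax(axis=0) != labels).mean()
print(f"fooling rate: {fooled:.2f}, ||delta||_inf = {np.abs(delta).max():.2f}")
```

Generator-based attacks replace the directly optimized delta with a neural network that maps random noise to a perturbation, trained under the same loss; the projection step above then becomes a bounded output activation on the generator.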

Related research:
- 03/02/2021: A Survey On Universal Adversarial Attack
- 03/11/2020: Frequency-Tuned Universal Adversarial Attacks
- 04/24/2020: One Sparse Perturbation to Fool them All, almost Always!
- 07/09/2018: Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks
- 07/11/2018: With Friends Like These, Who Needs Adversaries?
- 01/24/2018: Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations
- 07/10/2020: Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection
