Universal Adversarial Perturbations and Image Spam Classifiers

03/07/2021
by Andy Phung, et al.

As the name suggests, image spam is spam email that has been embedded in an image. Image spam was developed in an effort to evade text-based filters. Modern deep learning-based classifiers perform well in detecting typical image spam that is seen in the wild. In this chapter, we evaluate numerous adversarial techniques for the purpose of attacking deep learning-based image spam classifiers. Of the techniques tested, we find that universal perturbation performs best. Using universal adversarial perturbations, we propose and analyze a new transformation-based adversarial attack that enables us to create tailored "natural perturbations" in image spam. The resulting spam images benefit from both the presence of concentrated natural features and a universal adversarial perturbation. We show that the proposed technique outperforms existing adversarial attacks in terms of accuracy reduction, computation time per example, and perturbation distance. We apply our technique to create a dataset of adversarial spam images, which can serve as a challenge dataset for future research in image spam detection.
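The core idea of a universal adversarial perturbation is that a single perturbation vector, once computed, is added to every input image, subject to a norm budget so the change stays visually subtle. The following minimal sketch illustrates only that application step (not the optimization that finds the perturbation); the function name, the L-infinity budget `eps`, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apply_universal_perturbation(images, delta, eps=10.0):
    """Add one shared perturbation `delta` to a batch of images,
    clipping so each pixel changes by at most `eps` (L-infinity budget)
    and the result stays in the valid [0, 255] range."""
    delta = np.clip(delta, -eps, eps)       # enforce the perturbation budget
    perturbed = images + delta              # same delta broadcast to every image
    return np.clip(perturbed, 0.0, 255.0)   # keep pixels in valid range

# Toy usage: three 4x4 grayscale "spam images" and a random candidate delta.
rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(3, 4, 4))
delta = rng.uniform(-20, 20, size=(4, 4))
adv = apply_universal_perturbation(images, delta, eps=10.0)
```

In a real attack, `delta` would be optimized over a training set so that it flips the classifier's decision on most inputs; the clipping step above is what makes the perturbation "universal yet bounded" regardless of which image it is applied to.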

