TensorShield: Tensor-based Defense Against Adversarial Attacks on Images

02/18/2020
by Negin Entezari, et al.

Recent studies have demonstrated that machine learning approaches like deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input data can change the output of a deep neural network. Deploying such vulnerable machine learning methods raises serious concerns, especially in domains where security is an important factor. It is therefore crucial to design defense mechanisms against adversarial attacks. For the task of image classification, imperceptible perturbations mostly occur in the high-frequency spectrum of the image. In this paper, we utilize tensor decomposition techniques as a preprocessing step to find a low-rank approximation of images, which can significantly discard high-frequency perturbations. Recently, a defense framework called Shield was shown to "vaccinate" Convolutional Neural Networks (CNNs) against adversarial examples by performing random-quality JPEG compressions on local patches of images from the ImageNet dataset. Our tensor-based defense mechanism outperforms the SLQ method from Shield by 14% while maintaining comparable speed.
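The low-rank preprocessing idea described above can be sketched with a higher-order SVD (HOSVD), one common tensor decomposition: treat an H×W×C image as a 3-way tensor, keep only the leading singular vectors of each mode unfolding, and reconstruct. This is a minimal NumPy illustration of the general technique, not the paper's exact method; the function names and the chosen ranks are assumptions for the example.

```python
import numpy as np

def unfold(t, mode):
    # Matricize tensor t along the given mode: (d_mode, prod of other dims).
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_dot(t, m, mode):
    # Multiply tensor t by matrix m along the given mode.
    res = np.tensordot(m, np.moveaxis(t, mode, 0), axes=(1, 0))
    return np.moveaxis(res, 0, mode)

def hosvd_lowrank(image, ranks):
    # Leading singular vectors of each mode unfolding (truncated factors).
    factors = [np.linalg.svd(unfold(image, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    # Project onto the factor subspaces to get the core tensor ...
    core = image
    for n, U in enumerate(factors):
        core = mode_dot(core, U.T, n)
    # ... then map back to image space: the low-rank approximation.
    recon = core
    for n, U in enumerate(factors):
        recon = mode_dot(recon, U, n)
    return recon

# Example: keep ranks (8, 8, 3) of a 32x32x3 image, discarding the
# high-frequency components where adversarial noise tends to live.
img = np.random.default_rng(0).random((32, 32, 3))
denoised = hosvd_lowrank(img, (8, 8, 3))
```

In a defense pipeline, `hosvd_lowrank` would be applied to each input image before it reaches the classifier; smaller ranks discard more of the high-frequency spectrum at the cost of image detail.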


Related research

02/23/2022
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis
Although 3D point cloud classification has recently been widely deployed...

09/03/2023
Robust Adversarial Defense by Tensor Factorization
As machine learning techniques become increasingly prevalent in data ana...

02/27/2019
Disentangled Deep Autoencoding Regularization for Robust Image Classification
In spite of achieving revolutionary successes in machine learning, deep ...

07/03/2018
Local Gradients Smoothing: Defense against localized adversarial attacks
Deep neural networks (DNNs) have shown vulnerability to adversarial atta...

05/08/2017
Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression
Deep neural networks (DNNs) have achieved great success in solving a var...

08/06/2018
Defense Against Adversarial Attacks with Saak Transform
Deep neural networks (DNNs) are known to be vulnerable to adversarial pe...

08/06/2019
BlurNet: Defense by Filtering the Feature Maps
Recently, the field of adversarial machine learning has been garnering a...
