Structure-Preserving Progressive Low-rank Image Completion for Defending Adversarial Attacks

03/04/2021
by Zhiqun Zhao, et al.

Deep neural networks recognize objects by analyzing local image details and aggregating this information along the inference layers to reach a final decision. Because of this, they are prone to adversarial attacks: small, carefully crafted perturbations of the input image can accumulate along the network inference path and flip the decision at the network output. Human eyes, on the other hand, recognize objects from their global structure and semantic cues rather than from local image textures, which is why humans can still clearly recognize objects in images that have been heavily corrupted by adversarial noise. This observation suggests an interesting approach for defending deep neural networks against adversarial attacks. In this work, we develop a structure-preserving progressive low-rank image completion (SPLIC) method that removes unneeded texture details from the input images and shifts the bias of deep neural networks toward global object structures and semantic cues. We formulate the task as a low-rank matrix completion problem with progressively smoothed rank functions to avoid local minima during optimization. Our experimental results demonstrate that the proposed method successfully removes insignificant local image details while preserving important global object structures. Under black-box, gray-box, and white-box attacks, our method outperforms existing defense methods (by up to 12.6%).
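
To illustrate the core idea (this is a minimal sketch, not the authors' exact SPLIC algorithm), the snippet below performs progressive low-rank image completion using singular value thresholding as a smoothed surrogate for the rank function, tightening the threshold in stages so that early passes keep only coarse global structure and later passes gradually admit more detail. The function and parameter names (`progressive_lowrank_completion`, `svt`, `taus`, `mask`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: shrink singular values by tau and rebuild.
    # Acts as a smoothed surrogate for a hard rank constraint.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def progressive_lowrank_completion(img, mask, taus=(50.0, 20.0, 5.0), iters=50):
    # img  : 2-D grayscale image array (H x W)
    # mask : boolean array (H x W), True where pixels are observed/kept
    # taus : decreasing thresholds -- coarse-to-fine relaxation of the rank surrogate
    X = np.where(mask, img, 0.0)           # initialize unobserved pixels to zero
    for tau in taus:                       # progressively relax the rank smoothing
        for _ in range(iters):
            X = svt(X, tau)                # low-rank (structure-preserving) projection
            X = np.where(mask, img, X)     # re-impose the observed pixels
    return X

# Example usage (hypothetical data): keep 70% of pixels and reconstruct the rest.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = rng.random((64, 64)) < 0.7
recon = progressive_lowrank_completion(img, mask)
```

In this sketch, the coarse-to-fine schedule over `taus` stands in for the paper's progressive rank smoothing: the first stage enforces a strongly low-rank, structure-only reconstruction, and later stages add back detail while still suppressing the high-frequency texture that adversarial perturbations exploit.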

Related research:

- Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks (04/23/2020): Effective defense of deep neural networks against adversarial attacks re...
- ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation (05/28/2019): Deep neural networks are vulnerable to adversarial attacks. The literatu...
- Efficient Low-Rank GNN Defense Against Structural Attacks (09/18/2023): Graph Neural Networks (GNNs) have been shown to possess strong represent...
- Online Alternate Generator against Adversarial Attacks (09/17/2020): The field of computer vision has witnessed phenomenal progress in recent...
- Progressive Neural Networks for Image Classification (04/25/2018): The inference structures and computational complexity of existing deep n...
- PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks (05/31/2018): Deep learning systems have become ubiquitous in many aspects of our live...
- GLOW: Global Layout Aware Attacks for Object Detection (02/27/2023): Adversarial attacks aims to perturb images such that a predictor outputs...
