Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

03/14/2018
by Zihao Liu, et al.

Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that well-trained DNNs can be easily misled by adversarial examples (AEs) -- inputs maliciously crafted by introducing small, imperceptible perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from expensive retraining costs and offer only marginal robustness improvement against state-of-the-art attacks like the CW family of adversarial examples. In this work, we propose a novel low-cost "feature distillation" strategy that purifies the adversarial perturbations of AEs by redesigning the popular image compression framework "JPEG". The proposed "feature distillation" maximizes the loss of malicious AE perturbation features during image compression while suppressing the distortion of benign features essential for highly accurate DNN classification. Experimental results show that our method can drastically reduce the success rate of various state-of-the-art AE attacks by ~60% without harming the testing accuracy, outperforming existing solutions like default JPEG compression and "feature squeezing".
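The key intuition behind JPEG-based purification is that adversarial perturbations concentrate in high-frequency DCT bands, which quantization can crush while preserving the low-frequency content DNNs rely on. Below is a minimal, hypothetical sketch of this idea on a single 8x8 block: a standard 2D DCT followed by quantization whose step size grows with frequency. The `q_low`/`q_high` step parameters are illustrative assumptions, not the paper's derived quantization table.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows = frequencies, cols = pixels)
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

def purify_block(block, q_low=8.0, q_high=80.0):
    """JPEG-style purification of one 8x8 pixel block: quantize DCT
    coefficients, using a coarser step for higher frequencies, where
    adversarial perturbations tend to concentrate.
    q_low/q_high are hypothetical step sizes for illustration only."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T                      # forward 2D DCT
    # frequency index u+v, normalized to [0, 1]; blend step from q_low to q_high
    f = np.add.outer(np.arange(8), np.arange(8)) / 14.0
    step = q_low + (q_high - q_low) * f
    quantized = np.round(coeffs / step) * step    # lossy quantization
    return C.T @ quantized @ C                    # inverse 2D DCT

# round-trip a random block: low frequencies survive, high frequencies are crushed
block = np.random.default_rng(0).uniform(0, 255, (8, 8))
purified = purify_block(block)
```

The paper's actual contribution goes further: it derives a DNN-oriented quantization table (rather than the human-vision-oriented default) so that the quantization error falls mostly on malicious features, but the coarse mechanism is the same frequency-selective quantization shown here.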

Related research:

- Defensive Distillation is Not Robust to Adversarial Examples (07/14/2016)
- Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks (04/04/2017)
- Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization (05/08/2023)
- DLA: Dense-Layer-Analysis for Adversarial Example Detection (11/05/2019)
- Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff (08/31/2023)
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks (11/14/2015)
- SEDA: Self-Ensembling ViT with Defensive Distillation and Adversarial Training for robust Chest X-rays Classification (08/15/2023)
