Towards Deep Neural Network Architectures Robust to Adversarial Examples

12/11/2014
by Shixiang Gu, et al.

Recent work has shown deep neural networks (DNNs) to be highly susceptible to well-designed, small perturbations at the input layer, or so-called adversarial examples. Taking images as an example, such distortions are often imperceptible, but can result in 100% mis-classification for a state-of-the-art DNN. We study the structure of adversarial examples and explore network topology, pre-processing, and training strategies to improve the robustness of DNNs. We perform various experiments to assess the removability of adversarial examples by corrupting with additional noise and pre-processing with denoising autoencoders (DAEs). We find that DAEs can remove substantial amounts of the adversarial noise. However, when stacking the DAE with the original DNN, the resulting network can again be attacked by new adversarial examples with even smaller distortion. As a solution, we propose the Deep Contractive Network, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE). This increases the network's robustness to adversarial examples, without a significant performance penalty.
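
The smoothness penalty can be illustrated concretely. Below is a minimal PyTorch sketch, not the authors' code: it adds an input-gradient penalty to the usual cross-entropy loss as a simple stand-in for the paper's layer-wise contractive penalty. The architecture, penalty weight, and stand-in data are illustrative assumptions.

```python
# Sketch only: a classifier trained with a CAE-inspired smoothness
# penalty, in the spirit of the Deep Contractive Network. The paper
# penalizes layer-wise Jacobians; here we use a single input-gradient
# penalty as a cheap, illustrative surrogate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

def contractive_loss(model, x, y, lam=1e-2):
    """Cross-entropy plus the squared norm of the loss gradient w.r.t.
    the input, discouraging large output changes under small input
    perturbations (the property adversarial examples exploit)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # create_graph=True so the penalty itself can be backpropagated.
    (grad,) = torch.autograd.grad(ce, x, create_graph=True)
    return ce + lam * grad.pow(2).sum(dim=1).mean()

# Usage inside an ordinary training step (random stand-in batch):
model = MLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.rand(32, 784)          # stand-in flattened images
y = torch.randint(0, 10, (32,))  # stand-in labels
loss = contractive_loss(model, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```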

Related research

05/28/2023 · Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness
Recent works found that deep neural networks (DNNs) can be fooled by adv...

01/01/2019 · A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks
Deep neural networks (DNNs) have been widely used in the fields such as ...

10/29/2021 · ε-weakened Robustness of Deep Neural Networks
This paper introduces a notation of ε-weakened robustness for analyzing ...

11/27/2019 · Can Attention Masks Improve Adversarial Robustness?
Deep Neural Networks (DNNs) are known to be susceptible to adversarial e...

10/09/2018 · Analyzing the Noise Robustness of Deep Neural Networks
Deep neural networks (DNNs) are vulnerable to maliciously generated adve...

10/25/2017 · mixup: Beyond Empirical Risk Minimization
Large deep neural networks are powerful, but exhibit undesirable behavio...

07/08/2017 · Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks
Deep neural networks (DNNs) have excellent representative power and are ...
